
interwebtech
Members · Content Count: 1155

Everything posted by interwebtech

  1. thanks for taking a look at this. Notifications range from "high on usage (90%)" to "low on space (92%)". All drives are 8TB. tower-diagnostics-20200331-1103.zip
  2. This is still happening twice a day. The notifications clearly are not obeying the drive settings, either global or per drive.
  3. Bumped up to 128 GB. Just ordered 8x Samsung 16GB Single DDR4 2133 MT/s (PC4 2133) CL15 ECC Registered DIMM M393A2G40DB0-CPB (128GB) to replace the 8x 8GB Crucial RDIMMs (64GB). It cost me half as much to buy twice the RAM as when I built this box in 2018. Buy high, sell low it seems. lol
  4. I have found it useful to first "spin up all drives" for anything that needs to check every drive before doing its thing (like DiskSpeed does). Even reboots are faster if you just spin them all up first. It avoids possible timeouts or errors while waiting for drives to respond, as well as contention between reading and spinning up multiple drives. ps. 15-drive array; it may not be as much of an issue with fewer drives?
  5. [SOLD] Price includes PayPal fees and USPS flat rate box shipping (US only). It has been in my main laptop about 20 months.
  6. I have them set at the individual level to 95/97 as well. Is there a way to reset all to default so I can start fresh?
  7. For the past couple of months I have been deluged twice a day with 10 (was 4 for a while, now back up to 10) push notifications that disks are over the threshold, e.g., Subject: Warning [TOWER] - Disk 10 is high on usage (76%). (See pic.) It was harmless, if annoying, so I hadn't done anything other than tweak notification thresholds (no effect). Yesterday I was troubleshooting an ECC error in the logs (fixed by replacing the OS USB stick... who'da thunk?) and decided to try to fix this as well. No luck so far, so I'm back here to see if anyone has run across this too. I have thresholds set at 90% for warning and 95% for critical, so I shouldn't be getting any notifications, but for some reason it is sending them. Last night I tried resetting as many global disk settings as I could to default (see pic). Diagnostics attached, but I don't see anything in there that looks suspicious. tower-diagnostics-20200113-1122.zip
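For reference, the behavior those thresholds should produce is just a simple comparison — a disk at 76% should generate nothing with a 90/95 split. A minimal sketch (illustrative only, not Unraid's actual notification code):

```python
def disk_status(used_pct: float, warning: float = 90.0, critical: float = 95.0) -> str:
    """Classify a disk's utilization against the warning/critical thresholds."""
    if used_pct >= critical:
        return "critical"
    if used_pct >= warning:
        return "warning"
    return "ok"

# With thresholds at 90/95, a disk at 76% should not trigger anything.
print(disk_status(76.0))  # ok
print(disk_status(92.0))  # warning
print(disk_status(96.0))  # critical
```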
  8. I've been troubleshooting erroneous disk utilization warning emails and found there is no way to visually determine any particular disk's utilization status. I would like to propose adding a column to the display grid (far right, between Free & View) on the Main tab (Array Devices & Cache Devices sub-tabs) with simply "%" for the heading and the calculated percent full for each disk. Maybe go out a couple of decimal places so you can spot one about to go over without it actually having to happen. A total at the bottom too, but frankly that's already covered by the "Show array utilization indicator" doohickey.
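The proposed column is just a ratio of used space to capacity, shown to two decimal places. A quick sketch of the calculation (function name is illustrative):

```python
def percent_full(used_bytes: int, size_bytes: int) -> float:
    """Percent of a disk's capacity in use, to two decimal places."""
    return round(100.0 * used_bytes / size_bytes, 2)

# e.g. 7.6 TB used on an 8 TB disk
print(percent_full(7_600_000_000_000, 8_000_000_000_000))  # 95.0
```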
  9. FYI, you will need to have Plex server Network Settings | Secure connections set to Preferred to get this to work. Required fails to connect. I hammered on this for a while before stumbling across the advice on Reddit.
  10. Wide variety of plug-ins, dockers, etc. Folder Caching that freakin' works.
  11. [SOLD] Purchased the pair new 1 year ago. Excellent to like-new condition. I upgraded to a pair of E5-2690 v3 CPUs to double my Passmark score. (2) Intel Xeon E5-2620 v3 Six-Core Haswell Processor 2.4GHz 8.0GT/s 15MB LGA 2011-3 CPU, OEM. Asking $19.99 each, which includes PayPal fees & USPS flat rate box shipping to US addresses. If you want it shipped internationally, we will need to negotiate the cost.
  12. Sorry for using a screenshot but easiest way to quote from another forum (Plex forum) Link to TZ parameters: https://hub.docker.com/r/tautulli/tautulli/#parameters
  13. Seeing something new recently and don't understand what it's trying to tell me. Thanks for any light you can cast on these. Diags attached. This: Plex Media Scan[16812]: segfault at 12e11 ip 000014843563d097 sp 0000148426c8d000 error 4 in libcrypto.so.1.0.0[14843552c000+204000] and this: nginx: 2019/09/22 00:00:04 [error] 10585#10585: *3256146 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: , request: "GET /admin/api.php?version HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "127.0.0.1" tower-diagnostics-20190923-1529.zip
  14. Just replaced the last 4 mismatched-size drives with 8 TB ones. All drives are the same size now, 120 TB total.
  15. Ended up replacing 4 drives, but one of them was likely the actual boat anchor on throughput... a retail-boxed Seagate 6TB from 2014. The first swap resulted in the middle run from the 19th, a 4TB HGST 7200 RPM drive. Still scratching my head over why what should be a fast drive benches so poorly. The aforementioned 6TB was removed prior to the last check. Still not the best, but a huge improvement from 24+ hours to under 17 for a parity check. All drives are now 8TB.
      Date | Duration | Speed
      2019-08-20, 03:06:36 | 16 hr, 43 min | 133.0 MB/s
      2019-08-19, 10:07:58 | 18 hr, 39 min, 26 sec | 119.1 MB/s
      2019-08-18, 15:05:02 | 1 day, 39 min, 11 sec | 90.2 MB/s
      Will be running the tunables Long Test again as soon as time permits.
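As a sanity check on those history rows: a parity check reads the full parity disk end to end, so the average speed is just size divided by duration. A quick sketch, assuming the reported speed comes from the 8 TB parity size (the function name is made up for illustration):

```python
def parity_speed_mb_s(size_tb: float, hours: int, minutes: int, seconds: int = 0) -> float:
    """Average parity-check speed in MB/s from disk size and check duration."""
    duration_s = hours * 3600 + minutes * 60 + seconds
    return round(size_tb * 1e12 / duration_s / 1e6, 1)

# Durations from the parity-check history above (8 TB parity disk assumed)
print(parity_speed_mb_s(8, 16, 43))      # ~133 MB/s
print(parity_speed_mb_s(8, 18, 39, 26))  # ~119 MB/s
print(parity_speed_mb_s(8, 24, 39, 11))  # ~90 MB/s
```

The results land within rounding distance of the reported 133.0, 119.1, and 90.2 MB/s figures.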
  16. Ran the Long Test overnight with Docker disabled. I have set my server to use the Fastest setting based on results. I was hoping for better throughput numbers. Do I need to get rid of all my shingled drives and run all 7200 RPM to get the scores I am seeing from others? Or is 89.2 MB/s the best I can expect? LongSyncTestReport_2019_08_14_2118.txt
  17. Unraid user since v4.7. The server has completely morphed 4 times over that span into its current state. Who knew that the energy-efficient Sempron 145 Plex server with 4 TB (5x 1 TB drives including parity) from back then would grow into the 108 TB dual-Xeon firebreather I have now. Happy Birthday LimeTech & UNRAID!
  18. I currently have the LSI Logic LSI00244 SAS 9201-16i 16-Port 6Gb/s SAS/SATA Controller Card. I am looking at the LSI Logic 05-25703-00 SAS 9305-16i 16-Port 12Gb/s SAS/SATA Controller Card as a possible upgrade. I have all 16 ports occupied with a mixture of Seagate, WD & HGST 8TB drives (2x 6TB too), consisting of dual 8TB parity and 15 data drives. Will I see any improvement in throughput speed (especially parity checks) with the newer card?
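One way to reason about the upgrade: during a parity check, every drive on the card streams at once, so the question is whether their aggregate throughput saturates the card's host link. A rough back-of-envelope sketch — the PCIe figures are approximate theoretical limits (9201-16i is PCIe 2.0 x8, 9305-16i is PCIe 3.0 x8), and the per-drive rate is an assumed outer-track sequential read speed for 8 TB drives, not a measurement:

```python
# Back-of-envelope: does the HBA's host link bottleneck a 16-drive parity check?
drives_on_card = 16
per_drive_mb_s = 250       # assumed sequential read rate per drive (outer tracks)

aggregate_mb_s = drives_on_card * per_drive_mb_s
pcie2_x8_mb_s = 4000       # ~PCIe 2.0 x8 theoretical limit (9201-16i)
pcie3_x8_mb_s = 7900       # ~PCIe 3.0 x8 theoretical limit (9305-16i)

print(f"aggregate demand:  {aggregate_mb_s} MB/s")
print(f"9201-16i headroom: {pcie2_x8_mb_s - aggregate_mb_s} MB/s")
print(f"9305-16i headroom: {pcie3_x8_mb_s - aggregate_mb_s} MB/s")
```

Under these assumptions the old card sits right at its theoretical ceiling with no headroom, which suggests the host link, not the 6Gb/s vs 12Gb/s ports, is where an upgrade could matter.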
  19. cache_dirs appears to not be working properly with the latest 6.7 stable. Opening a cached share in windows exploder almost always lights up a drive on the array, which delays the display of folders until it's spun up. I've tried adjusting "Cache pressure of directories on system" from 10 to 1 but no change in behavior. Logs attached. tower-diagnostics-20190523-1807.zip
  20. StarTech.com 12U Adjustable Depth Open Frame 4 Post Server Rack with Casters/Levelers and Cable Management Hooks 4POSTRACK12U Black
      2x StarTech.com 1U Adjustable Mounting Depth Vented Rack Mount Shelf
      CyberPower OR1500LCDRM1U 1U Rackmount UPS System
      NORCO 4U Rack Mount 24x Hot-Swappable SATA/SAS 6G Drive Bays Server Rack Mount RPC-4224
      EVGA Supernova 850 G3, 80 Plus Gold 850W Modular Power Supply 220-G3-0850-X1
      ASRock EP2C612 WS Motherboard
      2x Intel Xeon E5-2620 v3 Six-Core Haswell Processor 2.4GHz LGA 2011-3 CPU
      2x Intel Xeon E5-2690 v3 12-Core Haswell Processor 2.6GHz LGA 2011-3 CPU (11/2019)
      2x Intel LGA 2011-3 Cooling Fan/Heatsink
      8x Crucial 8GB Single DDR4 2133 MT/s (PC4-2133) CL15 SR x4 ECC Registered DIMM CT8G4RFS4213 (64GB)
      4x Samsung 970 EVO 1TB - NVMe PCIe M.2 2280 SSD (MZ-V7E1T0BW)
      4x QNINE M.2 NVME SSD to PCIe adapter
      LSI Logic LSI00244 SAS 9201-16i 16-Port 6Gb/s SAS/SATA Controller Card
      1x NORCO Computer Parallel (reverse breakout) Cable (C-SFF8087-4S)
      4x 10Gtek Internal Mini SAS SFF-8087 Cable, 0.5 Meter
      2x Gigabit network adapters bonded to a single interface
      Unraid OS Pro 6.x
      4x 1TB RAID1 @ 2TB Cache Pool
      2x 8TB parity
      15x 8TB array @ 120TB
  21. I need to venture out of the Prereleases & General groups more often. First I heard of MERCH! Bought coffee mug, t-shirt & magnet. w00t!
  22. Oddly enough I saw no delay this time on syncing. I was Chicken Little on the last RC: "The install has hung! The install has hung!". lol