interwebtech

Everything posted by interwebtech

  1. Disregard the celebration. I just tried to open my TV share and it lit up one drive on the server (activity light) and didn't display anything until it had spun up. Going to try setting "Cache pressure of directories on system:" back to 1.
  2. I'm baaaack. I think I figured out my problem, or at least my fix. I set "Minimum level depth (for adaptive depth)" to 2 from the default of 4. I also set "Cache pressure of directories on system" back to 10 from my previous 1 (I think the first one is what fixed it, tho). Then I let the disks all spin down, opened the share (from a Windows PC) with the largest number of folders in it (20k+), and it displayed instantly. It used to take for-freakin-ever to load them all up as it spun up each disk in the share. The thing that finally clicked for me (lightbulb moment) was in the help for "Minimum level depth": "Sets the minimum folder level for the adaptive scan (user-share > child folder > grandchild is two levels). Default is 4." For some reason I thought my depth was, erm... deeper. My shares are pretty basic:
     Share (e.g., Movies)
       movie 1
       movie 2
       movie 3
       etc.
     Question: was the problem related to it trying to cache the file names within the "movie n" folders? If so, I have one share with 20K+ folders, each with a file in it, for 40K right there. I suspect not, tho, because the issue I had existed before that share grew so large.
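A quick way to sanity-check how deep a share actually goes before tuning "Minimum level depth" is just to count folders per level with `find`. A minimal sketch; the toy layout below stands in for a real share, and all the paths are made up for illustration:

```shell
# Build a toy share layout matching the "Share > movie n" structure
# described above (paths are made up for illustration).
tmp=$(mktemp -d)
mkdir -p "$tmp/Movies/movie 1" "$tmp/Movies/movie 2" "$tmp/Movies/movie 3"

# Count the first-level folders; with a layout this flat, a
# "Minimum level depth" of 2 already covers share > movie folder.
find "$tmp/Movies" -mindepth 1 -maxdepth 1 -type d | wc -l   # prints 3

rm -rf "$tmp"
```

Against a real box you would point `find` at `/mnt/user/<sharename>` instead of the temp directory.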
  3. Thanks for taking a look at this. Notifications range from "high on usage (90%)" to "low on space (92%)". All drives are 8TB. tower-diagnostics-20200331-1103.zip
  4. This is still happening twice a day. The notifications clearly are not obeying the drive settings, either global or per drive.
  5. Bumped up to 128 GB. Just ordered 8x Samsung 16GB Single DDR4 2133 MT/s (PC4-2133) CL15 ECC Registered DIMMs, M393A2G40DB0-CPB (128GB total), to replace 8x 8GB Crucial RDIMMs (64GB). Cost me half as much to buy twice the RAM as when I built this box in 2018. Buy high, sell low, it seems. lol
  6. I have found it useful to first "spin up all drives" before anything that needs to check every drive (like DiskSpeed does). Even reboots are faster if you just spin them all up first. It avoids possible timeouts or errors waiting for drives to respond, as well as contention between reading and spinning up multiple drives. P.S. 15-drive array; it may not be as much of an issue with fewer drives?
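For scripting the same idea, here's a minimal sketch that forces every array device to spin up by reading one sector from each in parallel. The `/dev/sd[b-q]` glob is an assumption for a box of this size and will differ per system; Unraid's own "Spin Up" button on the Main tab does the equivalent from the GUI:

```shell
# Read one sector from each disk in parallel so they all spin up
# together, instead of serially during the subsequent operation.
# NOTE: the device glob is an assumption; adjust it for your system.
for dev in /dev/sd[b-q]; do
  dd if="$dev" of=/dev/null bs=512 count=1 2>/dev/null &
done
wait
echo "all drives spun up"
```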
  7. [SOLD] Price includes PayPal fees and USPS flat rate box shipping (US only). It has been in my main laptop about 20 months.
  8. I have them set at the individual level to 95/97 as well. Is there a way to reset all to default so I can start fresh?
  9. For the past couple months I have been deluged twice a day with 10 (was 4 for a while, now back up to 10) push notifications that disks are over the threshold, e.g., Subject: Warning [TOWER] - Disk 10 is high on usage (76%). (See pic.) It was harmless if annoying, so I hadn't done anything other than tweak notification thresholds (no effect). Yesterday I was troubleshooting an ECC error in the logs (fixed by replacing the OS USB stick... who'da thunk?) and decided to try and fix this as well. No luck so far, so I'm back here to see if anyone has run across this too. I have thresholds set at 90% for warning, 95% for critical. I shouldn't be getting any notifications, but for some reason it is sending them. Last night I tried resetting as many global disk settings as I could back to default (see pic). Diagnostics attached, but I don't see anything in there that looks suspicious. tower-diagnostics-20200113-1122.zip
  10. I've been troubleshooting erroneous disk utilization warning emails and found there is no way to visually determine what any particular disk's utilization status is. I would like to propose adding a column to the display grid (far right, between Free & View) on the Main tab (Array Devices & Cache Devices sub-tabs) with simply "%" for the heading and the calculated percent full for each disk. Maybe go out a couple decimal places so you can spot one about to go over the threshold without it actually having to happen. Total at the bottom, but frankly that's already covered by the "Show array utilization indicator" doohickey.
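In the meantime the same number can be pulled from the shell. A sketch, assuming Unraid's usual `/mnt/disk1..N` mount points and GNU `df`:

```shell
# Percent used per data disk, two decimal places, so you can spot a
# disk creeping toward the warning threshold before it crosses it.
# Assumes data disks are mounted at /mnt/diskN (Unraid default).
df --output=target,size,used /mnt/disk* 2>/dev/null | awk 'NR > 1 {
  printf "%-12s %6.2f%%\n", $1, ($3 / $2) * 100
}'
```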
  11. FYI you will need to have Plex server Network Settings | Secure connections set to Preferred to get this to work; Required fails to connect. I hammered on this for a while before stumbling across the advice on Reddit.
  12. Wide variety of plug-ins, dockers, etc. Folder Caching that freakin' works.
  13. [SOLD] Purchased the pair new 1 year ago. Excellent, like-new condition. I upgraded to a pair of 2690s to double my Passmark score. (2) Intel Xeon E5-2620 v3 Six-Core Haswell Processor 2.4GHz 8.0GT/s 15MB LGA 2011-3 CPU, OEM. Asking $19.99 each, which includes PayPal fees & USPS flat rate box shipping to US addresses. If you want it shipped internationally, we will need to negotiate the cost.
  14. Sorry for using a screenshot, but it's the easiest way to quote from another forum (the Plex forum). Link to TZ parameters: https://hub.docker.com/r/tautulli/tautulli/#parameters
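For reference, passing TZ on the command line looks roughly like this. A sketch only: the image name follows the docker hub page linked above, while the zone and host port are example values, not anything from the original post:

```shell
# Run Tautulli with the TZ environment variable set so in-app times
# and log timestamps use local time instead of UTC.
# TZ value and host port below are example values; adjust to taste.
docker run -d \
  --name=tautulli \
  -e TZ=America/Chicago \
  -p 8181:8181 \
  tautulli/tautulli
```

On Unraid the same thing is done by adding a `TZ` variable to the container template in the Docker tab.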
  15. Seeing something new recently and don't understand what it's trying to tell me. Thanks for any light you can cast on these. Diags attached. This:
     Plex Media Scan[16812]: segfault at 12e11 ip 000014843563d097 sp 0000148426c8d000 error 4 in libcrypto.so.1.0.0[14843552c000+204000]
     and this:
     nginx: 2019/09/22 00:00:04 [error] 10585#10585: *3256146 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: , request: "GET /admin/api.php?version HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "127.0.0.1"
     tower-diagnostics-20190923-1529.zip
  16. Just replaced the last 4 mismatched-size drives with 8 TB ones. All drives are the same size now, 120 TB total.
  17. Ended up replacing 4 drives, but one of them was likely the actual boat anchor on throughput... a retail-boxed Seagate 6TB from 2014. The first swap resulted in the middle run from the 19th, a 4TB HGST 7200 RPM drive. Still scratching my head why what should be a fast drive benches so poorly. The aforementioned 6TB was removed prior to the last check. Still not the best, but a huge improvement from 24+ hours to 17 for a parity check. All drives are now 8TB.
     Date                  Duration                Speed
     2019-08-20, 03:06:36  16 hr, 43 min           133.0 MB/s
     2019-08-19, 10:07:58  18 hr, 39 min, 26 sec   119.1 MB/s
     2019-08-18, 15:05:02  1 day, 39 min, 11 sec    90.2 MB/s
     Will be running the tunables Long Test again as soon as time permits.
  18. Ran the Long Test overnight with Docker disabled. I have set my server to use the Fastest setting based on results. I was hoping for better throughput numbers. Do I need to get rid of all my shingled drives and run all 7200 RPM to get the scores I am seeing from others? Or is 89.2 MB/s the best I can expect? LongSyncTestReport_2019_08_14_2118.txt
  19. Unraid user since v4.7. Server has completely morphed 4 times over that time into its current state. Who knew that energy efficient Sempron 145 with 4 TB (5x 1 TB drives including parity) Plex server from back then would grow to the 108 TB dual Xeon firebreather I have now. Happy Birthday LimeTech & UNRAID!
  20. I currently have the LSI Logic LSI00244 SAS 9201-16i 16-Port 6Gb/s SAS/SATA Controller Card. I am looking at the LSI Logic 05-25703-00 SAS 9305-16i 16-Port 12Gb/s SAS/SATA Controller Card as a possible upgrade. I have all 16 ports occupied with a mixture of Seagate, WD & HGST 8TB drives (2x 6TB too), consisting of dual 8TB parity and 15 data drives. Will I see any improvement in throughput (especially parity checks) with the newer card?
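A rough back-of-envelope helps frame the question. Assumptions (ballpark figures, not measurements): the 9201-16i is PCIe 2.0 x8, roughly 4 GB/s raw host-link bandwidth; the 9305-16i is PCIe 3.0 x8, roughly 7.9 GB/s; and a modern 8 TB spinner sustains on the order of 180 MB/s sequential:

```shell
# Aggregate sequential throughput 17 spinning drives can push,
# vs. the host-link ceiling of each card (all ballpark numbers).
drives=17
per_drive_mbs=180                          # assumed MB/s per 8TB disk
aggregate=$(( drives * per_drive_mbs ))
echo "drives can push ~${aggregate} MB/s"  # ~3060 MB/s
echo "PCIe 2.0 x8 (9201-16i): ~4000 MB/s raw, ~3200 MB/s usable"
echo "PCIe 3.0 x8 (9305-16i): ~7900 MB/s raw"
```

On those numbers the older card sits right at the edge at the start of a parity check, when every drive is on its fast outer tracks, so the newer card could remove a ceiling there; but for most of the check the per-drive speed is the limit, not the controller.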
  21. cache_dirs appears to not be working properly with the latest 6.7 stable. Opening a cached share in Windows Exploder almost always lights up a drive on the array, which delays the display of folders until it's spun up. I've tried adjusting "Cache pressure of directories on system" from 10 to 1, but no change in behavior. Logs attached. tower-diagnostics-20190523-1807.zip