jbartlett

Community Developer
  • Posts: 1896
  • Days Won: 8
Everything posted by jbartlett

  1. I'm about to release another pre-alpha as a Docker, but support for weak-sector testing is still a bit off. However, I updated the post with the older bash script to include the modification done by bonienl so it'll work on unRAID 6.4 - link is in my tag.
  2. The monitor records events to a log file; you can try monitoring that. If you want to go that route, I can look into digging up a program I wrote that decodes the log file events.
  3. A straight read of the WD 8TB Red took 56626 seconds (roughly 15 hours 44 minutes) from end to end using a block size optimized for speed.
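      A minimal sketch of the kind of end-to-end timed read described above - the device name and block size are placeholders, not the exact values used for the WD 8TB Red run:
        DEV=/dev/sdX    # drive to read (placeholder)
        BS=16M          # "optimized" block size (placeholder)
        START=$(date +%s)
        dd if="$DEV" of=/dev/null bs="$BS" iflag=direct 2>/dev/null
        END=$(date +%s)
        echo "Full read took $((END - START)) seconds"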
  4. Here's the speed graph for the WD 8TB Red. I ran the test multiple times, including this one, which used a 3-scan average every 5%.
  5. One of two ST8000AS0002 8TB (1NA17Z) drives I ordered back in November 2015 is starting to fail with sector read errors. These drives are in my backup server and are pretty much only written to with new files; files are almost never deleted or replaced (excluding preclear tests). I noticed it last night when I tried to copy a file from it and the copy kept failing with a network error, so I started a read scan. My confidence in Seagate has been waning; I've got a WD 8TB Red arriving this evening to replace it.
      ID# ATTRIBUTE_NAME           VALUE WORST THRESH FAIL RAW_VALUE
        5 Reallocated_Sector_Ct     100   100   010    -   8
      183 Runtime_Bad_Block         099   099   000    -   1
      187 Reported_Uncorrect        065   065   000    -   35
      189 High_Fly_Writes           098   098   000    -   2
      197 Current_Pending_Sector    100   100   000    -   208
      198 Offline_Uncorrectable     100   100   000    -   208
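      For reference, the dump above is the sort of attribute table smartctl reports; a minimal sketch, with /dev/sdX standing in for the failing ST8000AS0002:
        # Full SMART attribute table (the dump above is a subset of this output)
        smartctl -A /dev/sdX
        # Just the attributes that flagged on this drive
        smartctl -A /dev/sdX | grep -E 'Reallocated_Sector_Ct|Reported_Uncorrect|Current_Pending_Sector|Offline_Uncorrectable'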
  6. A Windows solution would be Beyond Compare by Scooter Software. You can select your source files on the left and your destination on the right, then filter by year. It also has scripting capabilities. You can optionally hide/ignore movies/years which you don't care about or have already seen.
  7. It would be safe to say that, at first, we'll try to get max drive read speeds within a small margin of error. Later, we'll work on trying to eke out every byte. I'll allow a manual override of block size to be entered if it is evenly divisible by the logical sector size.
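      A minimal sketch of that divisibility check, assuming the override is given in bytes and the logical sector size comes from blockdev; the device and override value are placeholders:
        DEV=/dev/sdX          # placeholder device
        OVERRIDE=262144       # user-supplied block size in bytes (example value)
        LOGICAL=$(blockdev --getss "$DEV")    # logical sector size in bytes
        if (( OVERRIDE % LOGICAL == 0 )); then
            echo "Using manual block size of $OVERRIDE bytes"
        else
            echo "Block size must be a multiple of the ${LOGICAL}-byte logical sector size" >&2
        fi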
  8. And we're not even touching the advanced format SAS drives that change their sector sizes based on how far out from the spindle the spot is.... It'd be interesting to see some benchmarks with one of those.
  9. I'm getting the value from "MaxSectorsPerRequest" using blockdev, and for the drives I double-checked, it matched the value in the drive's "queue" device directory. However, I'm now likewise a little suspect since all my drives are reporting the same value. Could be a coincidence, but.... I haven't been fully satisfied with how I was doing the tests: running a balls-to-the-wall read for 3 seconds at each block size to get a baseline scan, then 8-second tests at the three block sizes with the highest results. Those 3-second tests kept showing a false spike towards the top end of the block sizes, and the time it took to scan a system with many drives was just too long - I know people will get impatient. So I'm going to default to the MaxSectorsPerRequest value, which gives a good baseline starting point, but let people run an in-depth scan that uses a smarter method: do a ten-second read starting at the MaxSectorsPerRequest block size, then check above & below it to see if above is equal (within a tiny margin of error) and below is less. If the values show improvement with a bigger or smaller block size, adjust and rescan. Then optionally apply that same result to all drives with the same make/model/rev on the same controller. The option of testing individual drives will be there too. A rough sketch of the idea is below.
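      The sketch is not the app's actual code, and it times a fixed 2 GiB read rather than a strict ten-second window to keep the example simple; /dev/sdX is a placeholder:
        DEV=/dev/sdX
        LOGICAL=$(blockdev --getss "$DEV")        # logical sector size in bytes
        MAXSECT=$(blockdev --getmaxsect "$DEV")   # max sectors per request
        START_BS=$(( MAXSECT * LOGICAL ))         # baseline block size

        speed() {  # speed <block size in bytes> -> MiB/s for a 2 GiB read
            local bs=$1
            local count=$(( 2 * 1024 * 1024 * 1024 / bs ))
            local t0=$(date +%s.%N)
            dd if="$DEV" of=/dev/null bs="$bs" count="$count" iflag=direct 2>/dev/null
            local t1=$(date +%s.%N)
            awk -v a="$t0" -v b="$t1" 'BEGIN { printf "%.1f", 2048 / (b - a) }'
        }

        BASE=$(speed "$START_BS")
        UP=$(speed $(( START_BS * 2 )))
        DOWN=$(speed $(( START_BS / 2 )))
        echo "Baseline $START_BS bytes: $BASE MiB/s (double: $UP, half: $DOWN)"
        # If the larger or smaller size is meaningfully faster, shift the baseline and rescan;
        # otherwise keep START_BS and optionally apply it to all drives of the same
        # make/model/rev on the same controller.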
  10. If you need something right now, build an unRAID stick with 6.3.5 on it and boot from that to run the SH script.
  11. I created a test script to read 5 GB of data starting with a block size of 512 bytes and doubling it every pass, which was the same logic I used in the app. Here's the log of how long it took and the average speed against my 6TB WD Red Pro 2 drive. The drive reported that 16K was the optimal block size.
      5120000000 bytes (5.1 GB, 4.8 GiB) copied, 446.494 s, 11.5 MB/s (512B)
      5120000000 bytes (5.1 GB, 4.8 GiB) copied, 229.114 s, 22.3 MB/s (1K)
      5120000000 bytes (5.1 GB, 4.8 GiB) copied, 116.944 s, 43.8 MB/s (2K)
      5120000000 bytes (5.1 GB, 4.8 GiB) copied, 62.9478 s, 81.3 MB/s (4K)
      5120000000 bytes (5.1 GB, 4.8 GiB) copied, 36.6456 s, 140 MB/s (8k)
      5120000000 bytes (5.1 GB, 4.8 GiB) copied, 31.7995 s, 161 MB/s (16K)
      5120000000 bytes (5.1 GB, 4.8 GiB) copied, 31.8183 s, 161 MB/s (32K)
      5120000000 bytes (5.1 GB, 4.8 GiB) copied, 31.7485 s, 161 MB/s (64K)
      5119934464 bytes (5.1 GB, 4.8 GiB) copied, 31.758 s, 161 MB/s (128K)
      5119934464 bytes (5.1 GB, 4.8 GiB) copied, 31.7385 s, 161 MB/s (256K)
      5119672320 bytes (5.1 GB, 4.8 GiB) copied, 31.7558 s, 161 MB/s (512K)
      5119148032 bytes (5.1 GB, 4.8 GiB) copied, 31.7352 s, 161 MB/s (1M)
      5119148032 bytes (5.1 GB, 4.8 GiB) copied, 31.758 s, 161 MB/s (2M)
      5117050880 bytes (5.1 GB, 4.8 GiB) copied, 31.7343 s, 161 MB/s (4M)
      5117050880 bytes (5.1 GB, 4.8 GiB) copied, 31.7281 s, 161 MB/s (8M)
      5117050880 bytes (5.1 GB, 4.8 GiB) copied, 31.7283 s, 161 MB/s (16M)
      5100273664 bytes (5.1 GB, 4.8 GiB) copied, 31.6791 s, 161 MB/s (32M)
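      A minimal sketch of the kind of doubling-block-size loop that produced the log above; /dev/sdX is a placeholder and the read size matches the ~5 GB used per pass:
        DEV=/dev/sdX
        TOTAL=5120000000                        # bytes to read per pass (~5 GB)
        BS=512
        while (( BS <= 32 * 1024 * 1024 )); do  # 512 bytes up through 32M
            COUNT=$(( TOTAL / BS ))             # integer truncation is why the later passes read slightly less
            dd if="$DEV" of=/dev/null bs="$BS" count="$COUNT" iflag=direct 2>&1 | grep copied
            BS=$(( BS * 2 ))
        done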
  12. Looks like I can do away with the whole thing of reading the drives at incrementing block sizes to determine the optimum block size. The drives return this information! Multiplying the max-sectors-per-request value by the logical sector size yields the optimal block size.
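      A minimal sketch of pulling both values with blockdev and applying that formula (/dev/sdX is a placeholder):
        DEV=/dev/sdX
        MAXSECT=$(blockdev --getmaxsect "$DEV")   # max sectors per request
        LOGICAL=$(blockdev --getss "$DEV")        # logical sector size in bytes
        echo "Optimal block size: $(( MAXSECT * LOGICAL )) bytes"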
  13. Oooo, I'm able to figure out the max data transfer speeds for storage controllers that report a link speed. I can show you on a graph how much data you're transferring per drive with the max possible throughput. So basically, I can give you a percentage of how much bandwidth you're using and how much you have free. Then you can decide for yourself how to best utilize that bandwidth and with what drives.
      SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
      LSI Logic / Symbios Logic Serial Attached SCSI controller
      Current Link Speed: 5GT/s width x8 (4 GB/s max throughput)
      Maximum Link Speed: 5GT/s width x8 (4 GB/s max throughput)
      "Exciting stuff." -Cave Johnson
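      The 4 GB/s figure falls out of the link math; a small sketch, assuming the 8b/10b encoding used by 2.5 and 5 GT/s links (newer 8 GT/s links use 128b/130b instead):
        GTS=5      # transfer rate in GT/s
        WIDTH=8    # lane count
        # 5 GT/s x 0.8 (8b/10b overhead) = 4 Gbit/s of data per lane = 500 MB/s per lane;
        # multiply by the lane count to get the max throughput of the link.
        awk -v g="$GTS" -v w="$WIDTH" 'BEGIN { printf "%.1f GB/s max throughput\n", g * 0.8 / 8 * w }'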
  14. From a development standpoint, here are the steps I had to take to deploy each version:
      Plugin:
        • Check to see if a new Java version is available. Update the path reference in the PLG file if so.
        • Check to see if a new Lucee version is available. If so, update the path reference in the PLG file and the Lucee version so existing installs just download the new lucee.jar file instead of everything.
        • Every so often, check to see if there are new versions of the utility packages (such as nvme-cli) and integrate them.
        • Zip up the source code and place it on my web server.
        • Update the PLG file to reference the new file name for the source file, update the version number of the plugin, then deploy the updated PLG.
        • Update the plugin to ensure everything updates correctly.
        • Uninstall & reinstall the plugin to ensure everything installs correctly.
      Docker:
        • Copy the directory with the code to my "dockerfile" directory.
        • Build the Docker image.
        • Push the Docker image.
      Any updates to any part such as Java, Lucee, or packages are automatically bundled in with no effort on my part.
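      For comparison, the Docker half boils down to a couple of commands; a minimal sketch with placeholder paths and image name:
        cp -r ./source/. ./dockerfile/                # copy the code into the Dockerfile directory
        cd ./dockerfile
        docker build -t someuser/someimage:latest .   # build the Docker image
        docker push someuser/someimage:latest         # push it to the registry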
  15. Lucee is a free app server for the ColdFusion scripting language; unRAID only supports PHP natively. We're not talking just systems with minimal RAM installed - many people (like myself) have systems with large amounts of RAM that is mostly allocated to VMs. And regardless of going with a Plugin or Docker, the same RAM allocation is going to take place since Lucee/Java would need to be installed & running either way. Going the Docker route keeps it more tightly contained, and the end user GUI experience is identical.
  16. Check out Beyond Compare by Scooter Software. You can create a batch file that'll instruct it to mirror a directory and then schedule that batch file to run via your PC's scheduler of choice. It's very fast at copying, and if you utilize FTP, it can perform the sync with x number of connections to maximize your connection.
  17. If it was only ever intended to be used under unRAID, this would likely be true, but I've always wanted to eventually expand past unRAID, and switching to Docker now while it's still in early development makes the most sense since I validated that I could access the host's hardware from inside the container. And running under Docker is a good compromise for those with limited RAM availability, as it uses already-provided functionality for starting/stopping. As a Plugin, I would need to provide an extra layer to start & stop the Lucee app server. I see the same. There's likely an extra pass-through going with Docker, and if there is, any delays it introduces are negated by the latencies of the drive itself.
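      A minimal sketch of giving a container access to the host's drives - not necessarily how this particular container is configured; the image and device names are placeholders:
        # Pass specific drives through to the container
        docker run -d --name diskbench --device /dev/sdb --device /dev/sdc someuser/someimage:latest
        # Alternatively, --privileged together with -v /dev:/dev exposes every host device inside the container.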
  18. Screenshots are your friend here. Every time I make a drive assignment change, I pull up the GUI on my iPad and take a screenshot.
  19. What you need is some kind of drive cluster that maps/connects the servers and, if the OS becomes unresponsive, switches over to the other server/node. Then you just need to spin up those Dockers on the 2nd server and you should be good to go. In theory.
  20. I've been experimenting with HEVC video with my setup and the potential for converting my video library from raw MKVs to it. One thing you have to keep in mind when testing the server's capacity for transcoding is the buffer size you have configured. If you have your buffer set to 5 minutes, it'll run balls-to-the-wall until it transcodes a five-minute buffer for the end client before it starts to throttle back. So it could look like the server can't handle the HEVC transcode when it's really just building up a buffer.
      I run PMS in a Windows VM as I've found it to be more stable than the Docker version. I've had my Plex datastore get corrupted twice under Docker, forcing a complete rebuild (which was extremely annoying both times). Under the WinVM, I have Plex set up to store its data on its own virtual drive, and I use True Image to make backups of it.
      I'm now looking to upgrade my NAS box to take advantage of the newer CPUs and hopefully multi-CPU for a high core count. The mad scientist in me wants to give all those cores to the WinVM and tell all of my remote family to watch a movie! (maniacal laughter) The balance point I'm working on now is clock speed vs core count - the more cores, the slower the CPU - I want the power, but more cores can offset that need for power.
  21. My ultimate goal is to be able to branch it out past unRAID and onto Unix in general, and hopefully Windows one day. My goal from the start was to minimize dependencies on unRAID and to derive all information from the OS itself. As things stand now, the only information it needs unRAID for is drive slot assignment. Another one of my goals is to use this tool to gather information for a hard drive database since "The HDD Platter Capacity Database" has gone offline. People will be able to submit their drive scans to a global database and compare their drive's performance with others'. With a full drive scan giving a heat map of the drive's performance, determining the platter setup should be possible for drives whose manufacturers don't provide such information.