Everything posted by Pauven

  1. Oh, another thought/question: Now that we're on 64-bit unRAID, do I still need to worry about the "Best Bang for the Buck", or just focus on the "Unthrottled" fastest possible values? I had come up with some new metrics to try to derive the thriftiest values, but if memory is no longer a concern, perhaps that's just silly. -Paul
  2. That helps catch me up, thanks! I was going to ask about it, and nr_requests. I've got some ideas for preliminary testing of md_sync_thresh, so I'll add that into the mix and see how it goes. I was also going to ask if Tom had ever revealed how md_num_stripes vs md_sync_window works now that md_write_limit is gone. The script I'm currently testing was never released to the public, and it includes Write testing. I had found that md_write_limit had a large impact on write speeds, and now that it is gone, I don't know what to think. I think I had a pretty solid understanding of md_num_stripes in 5.0, but now I'm lost. Any rules of thumb for setting num_stripes vs. sync_window? I also see a new tunable named md_write_method, with selectable values of read/modify/write, or reconstruct write. What in the world is that?
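In the meantime, I've just been pulling the current values straight off the flash and poking at them with mdcmd, roughly like this (treat it as a sketch - it assumes these names actually appear in disk.cfg on 6.2 and that "mdcmd set" still works the way it did for me on 5.0):

# Show the tunables as currently saved on the flash drive
egrep "md_num_stripes|md_sync_window|md_sync_thresh|md_write_method" /boot/config/disk.cfg

# Try a value against the running array (takes effect immediately, not saved to disk.cfg)
mdcmd set md_sync_window 1024

-Paul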
  3. That would be awesome, I appreciate the offer. Don't be surprised if I take you up on it. By the way, I've been meaning to ask: Has anyone created a plugin that tests these tunables? I didn't see one available, but thought I might have missed it if someone already created one. I'm still finding that I get a nice bell curve on my server, not a flat line like you get on yours. Only the peak of the curve has shifted to lower values. Considering how little attention this thread has seen in the past couple years, I'm thinking LimeTech's done something right to make this less of an issue. -Paul
  4. Hey Squid, So funny that you posted this today (much appreciated by the way). I've spent all day working on the next version, compatible with 6.2. Today is the first time I've touched it in nearly 3 years. Your fix for md_write_limit is necessary. Not sure about changing the location of the mdcmd. I was going to make that change today, but found that on 6.2 it works in the original location. Perhaps LimeTech did something to reverse what they changed in 6.0/6.1. I've found a few things in my testing with 6.2.0-RC4 today - the last time I tested was on unRAID 5.0. The old values that worked so well on 5.0 don't seem to apply to 6.2. Significantly lower md_sync_window values are now working better, for my server at least. Of course, a lot has changed since 5.0 (32 to 64 bit, newer drivers, newer kernel, newer unRAID code, and on my server, 4GB to 16GB and now Dual Parity), any of which could be a factor in my server now responding better to different values than on unRAID 5.0. Also, in testing and troubleshooting bugs, I accidentally ran a detailed Pass 2 in the wrong region - it should have run md_sync_window values in the 640-768 region, and instead it ran from 384-768. This was interesting because there was a hidden gem at 560, where it ran 1.5 MB/s faster than any other setting, and at a much lower value than the 768 value I was targeting from Pass 1 (though the values around 768 more consistently produced fast results). Had my script run correctly, I would never have tested 560, and never found this hidden value with so much potential. 560 might just be an anomaly; I plan to do some more testing to see if it consistently produces good results, but for now it has me rethinking the search routine. I'm actually hoping further testing proves it was just a fluke, otherwise I'm not sure what I will do to try and find these hidden nuggets of goodness. There are lots of other improvements in my new version, but lots of testing to do before it is ready to share. I'm also thinking it would be nice to put a GUI on it and make it a real plug-in, but there's a steep learning curve for me to figure out how to do that.
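To explain what I mean about the search routine: stripped way down, the current logic is a coarse pass followed by a fine pass that brackets the coarse winner. Something like this - an illustration of the idea only, not the actual script, and run_test here is a made-up stand-in for the routine that sets md_sync_window, runs a timed partial parity check, and echoes the speed in whole MB/s:

# Illustration only - run_test is a hypothetical helper, not part of the script
best_win=0; best_speed=0

# Pass 1: coarse sweep of md_sync_window
for win in $(seq 512 128 3072); do
    speed=$(run_test $win)
    if [ "$speed" -gt "$best_speed" ]; then best_speed=$speed; best_win=$win; fi
done

# Pass 2: fine sweep bracketing the Pass 1 winner
for win in $(seq $((best_win - 128)) 16 $((best_win + 128))); do
    speed=$(run_test $win)
    if [ "$speed" -gt "$best_speed" ]; then best_speed=$speed; best_win=$win; fi
done

# The problem: a fast outlier like my 560 falls outside the Pass 2 bracket,
# so a search shaped like this never even tests it.

-Paul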
  5. That's very odd, how little memory the beta preclear is using on your system, and how much it is using on mine. I'll run a zero only, skipping the pre/post reads, and see if that has an impact. I would like to bring to light some additional concerns. Last year I suffered a dual-drive failure. Luckily I was able to fully recover (part of the brilliance of unRAID). Since I still had them laying around, I decided to give both of these bad drives a workout in the new beta pre-clear, and I was not pleased with the results. Both drives (WD Red 3TB) have the same problem: they randomly go offline. Through multiple efforts, I've never been able to even complete a single pass, start to finish, on either drive. This is both inside the array and outside the server on my desktop with my various rebuild attempts. On the beta pre-clear, I ran a 3-pass preclear with pre/post reads. One drive (my old parity drive) went offline spectacularly during the first pre-read (my server couldn't see it anymore, but in its place a new drive showed up with 6PB, yes 6 PetaBytes, how I wish!!!). Oddly, the pre-clear didn't report any issues, and when I clicked on the "preview" icon the output window was blank. The other drive (an old data drive) somehow miraculously made it through the first pre-read and the zeroing, before going offline at exactly the beginning of the first post-read - at least that's what the pre-clear would have me believe. Oddly, I let the first post-read run for several hours, the progress stayed at exactly 0%, but the beta pre-clear script was happily continuing as if nothing was wrong. In reality, I don't think that drive even made it through the zeroing, I think the pre-clear just kept marching on, ignoring how the drive was responding. The chances that this drive made it through two complete back-to-back passes (a read and a zero), when I couldn't even get 1 complete pass in over a dozen tries, only to fail exactly at the beginning of the 3rd pass... I think I have better odds playing the lottery. I plan to run Joe L.'s script and see how it behaves on the same drives. I don't recall if I've ever used his script on a known bad drive, but I'm thinking that it correctly reports when a drive has issues. -Paul
  6. Yikes, looks like I failed "Internets 101". Thanks for the heads up, I corrected the link in my post. Also, in rereading the linked write-up, I had a few other ideas at that time that I've forgotten about: Spin Status, and possibly a toggle; a Used/free space bar graph; other SMART metrics, or links to run SMART. Basically, a merging of the data that is on the MAIN tab with the beautiful layout of the Server Layout plugin, plus the visual heatmap. -Paul
  7. Hey theone, great plugin, I've been using it for over a year, and find it very helpful. About 3 years ago, I wrote up a request to have unMenu display a disk temperature heat map. At the time, people said it was possible, but pushed it back on me to do the coding - but I wasn't smart enough to do it. Fast forward to today, and I'm realizing that your Server Layout Plugin may be the perfect candidate for my idea, because it already has the right layout, it's just missing the temps. Here's a link to the full write up with pictures of my idea: http://lime-technology.com/forum/index.php?topic=27051.msg248318#msg248318 Long story short, what I'm thinking is that: a) It would be real nice to have drive temps displayed along with the other drive info, and b) It would be even nicer if, instead of the brushed metal drive background (which is sharp looking, by the way), each drive's background were a color representing its temperature, from blue = cool to red = hot hot hot. This would help visually identify hot spots in the server that might need better cooling. Do you think this is doable?
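Just to illustrate the idea, something along these lines is all I'm picturing for the color part (a rough sketch only - the thresholds and hex colors are made up, and I'm assuming smartctl is how the plugin would grab the temp):

#!/bin/bash
# Rough idea only: read a drive's temperature and pick a background color.
# Thresholds and hex colors below are made up for illustration.
dev=$1
temp=$(smartctl -A "$dev" | awk '/Temperature_Celsius/ {print $10; exit}')
if   [ "$temp" -lt 30 ]; then color="#2060ff"   # blue  = cool
elif [ "$temp" -lt 40 ]; then color="#30c030"   # green = warm
elif [ "$temp" -lt 45 ]; then color="#ffa000"   # amber = getting hot
else                          color="#ff2020"   # red   = hot hot hot
fi
echo "$dev: ${temp}C -> $color"

Thanks, Paul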
  8. I've been doing some more testing, and wanted to share my results. First, I upgraded my server to 16GB (the max of my motherboard). Baseline metrics: On a fresh boot, with the array stopped the server is using 4% of allocated memory, with the array running it is using 8% of 16GB. With the array running, I then started a single pre-clear while watching "top". Memory utilization jumped from 8% to 15%. Interestingly, I see multiple "preclear_+" processes. There seems to be a main one, consuming 1.023GB within a few seconds of starting. I occasionally see 1 or 2 more pop up (I'm sorting by memory usage, so they pop to the top for brief moments). When they show up, they are using unique PID's and 1.022GB of RAM each (in addition to the 1.023GB used by the main "preclear_+"). These extra "preclear_+" processes only stay there a second and then they're gone, and each time they return they have new PID's. If I am interpreting this correctly, there is the main preclear process running all the time, consuming just over 1GB of RAM, and it occasionally creates/destroys 1 or 2 extra processes to do some extra tasks, and these are both consuming about 1GB of RAM each. But I'm not sure I'm interpreting this correctly because, for the brief moments that the extra preclear processes pop up, the free/used/cache/avail mem stats don't move enough to verify that the two extra processes are consuming another 2GB of RAM. I then started a second pre-clear. Increasing the server memory from 4GB to 16GB appears to have addressed the errors I was receiving before. Memory utilization climbed to 21%. On average, memory utilization is going up 6.5% of 16GB per pre-clear, which matches up nearly exactly with the 1.023GB per primary process that I see in top. Going back to top, I see a second primary "preclear_+" running, this time consuming 1.024GB instead of 1.023GB. I got smart and applied a filter in top to only show the "preclear_+" processes, and changed the update frequency to 10Hz. It then became very obvious that the two main processes were each constantly spawning and killing two extra processes, about once a second. For brief moments I can see up to six total processes running. And while top seems to indicate that each of the six processes is using 1+GB, the server memory stats stay rock solid during process creation and destruction, so I'm thinking this is actually memory being shared between a parent process and child processes, so each pre-clear is using 1GB of RAM, and not actually up to 3GB as top seems to indicate. I also just noticed 7th and 8th pre-clear processes popping up. This is happening so fast, it's almost impossible to notice. So each pre-clear is actually spawning 3 simultaneous sub-processes... that I can see. This is all during the pre-read stage. So I think I'm pretty much done with my testing. Each beta pre-clear uses just over 1GB of RAM from start-up, and throwing memory at the problem alleviated the memory related errors. I don't have enough spare disks at the moment to test more than two pre-clears simultaneously, perhaps later. I also looked at the source code to see if I could find a reason for the high memory usage. Nothing popped out at me, but honestly the code is way beyond my skill level. Wish I could help more.
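If anyone wants to watch the same thing on their own server, this is basically all I'm doing (ps truncates the command name, so I just match on "preclear"; a while/sleep loop works too if watch isn't installed). Keep in mind RSS counts shared pages in every process, which is why the short-lived children look like they add another 1GB each when they probably don't:

# Refresh once a second, largest memory users first
watch -n 1 'ps -eo pid,ppid,rss,etime,comm --sort=-rss | grep -i preclear'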
  9. I just ran a "top" to see how much memory is being used. I see there are three "preclear_+" commands running, which makes sense, since I had started three, 2 of which have stopped due to memory issues, but all three underlying processes are still running. All three pre-clears (including the two that stopped) are using 1.15GB of RAM each, which is 30% of my unRAID allocated memory (3.853GB allocated of 4GB installed). I find it interesting that all three are using 1.15GB, and only one has run more than an hour or so (about 24 hours now). This suggests that this isn't a memory leak, but rather all the memory is allocated at pre-clear start up, which agrees with the immediate memory errors I got when starting additional pre-clears. That is a lot of memory for a single process.
  10. Anecdotal evidence leads me to believe you are correct. With the array stopped and 2 pre-clears running overnight, this morning I observed that the server's memory usage was about 75% (as viewed on the Dashboard). The pre-reads had just finished and the zeroing was <10% at that time. My server has 4GB RAM installed. At around 25%-35% of the zeroing, one of the drives stopped preclearing, I got the message "Preclear Finished Successfully!", and when I click the "Preview" button I get a blank window, so I can't see any details. There is nothing in the unRAID log file, or the preclear.disk.beta.log file, to give any further insight. I don't know what my memory utilization was at the moment it stopped, but afterwards the memory usage was down to about 49%. I'm thinking my baseline memory utilization was ~25% with the array running, surely less with it stopped. I know memory is cheap and all, and now that we've got 64-bit unRAID I can throw more memory at the problem, but it seems to me that a pre-clear consuming 1-2GB of RAM is not ideal. To support 4 simultaneous pre-clears, plus keep the array up and running, I would need 16GB. Though it needs retesting to verify, I think I could run the same using Joe L.'s script with my current 4GB. This leads me to ponder if the Beta pre-clear is wasting memory, or if due to improved functionality it simply consumes more memory by design. I would prefer to stick with the Beta version going forward, and I hope that the potential memory issues described here are fixable.
  11. A few additional details: I upgraded to 6.2-RC3, but the issue remained the same as on 6.1.9. With the array stopped, I can run 2 pre-clears simultaneously. If I start the array while both pre-clears are running, the pre-clear that was started first is stopped with the same message regarding "fork: Cannot allocate memory". If I stop the array, I can once again run two pre-clears simultaneously. More craziness: with the array stopped, if I attempt to start a 3rd simultaneous pre-clear, the 1st pre-clear stops with the "fork: Cannot allocate memory" message, and the 2nd pre-clear immediately changes to "Preclear Finished Successfully!" (even though it was only 2% into the pre-read). Here's the output from the "Preview":

unRAID Server Pre-Clear of disk /dev/sde
Cycle 1 of 1, partition start on sector 64.
Step 1 of 5 - Pre-read in progress: (2% Done)
** Time elapsed: 0:16:57 | Current speed: 136 MB/s | Average speed: 85 MB/s
Cycle elapsed time: 0:16:59 | Total elapsed time: 0:16:59

S.M.A.R.T. Status
ATTRIBUTE                      INITIAL   STATUS
5-Reallocated_Sector_Ct        0         -
9-Power_On_Hours               3798      -
194-Temperature_Celsius        33        -
196-Reallocated_Event_Count    0         -
197-Current_Pending_Sector     0         -
198-Offline_Uncorrectable      0         -
199-UDMA_CRC_Error_Count       0         -
SMART overall-health self-assessment test result: PASSED

/usr/local/emhttp/plugins/preclear.disk.beta/script/preclear_disk.sh: fork: Cannot allocate memory
/usr/local/emhttp/plugins/preclear.disk.beta/script/preclear_disk.sh: line 597: 88830443520 + : syntax error: operand expected (error token is "+ ")
--> ATTENTION: Please take a look into the SMART report above for drive health issues.
--> RESULT: Preclear finished succesfully.
/usr/local/emhttp/plugins/preclear.disk.beta/script/preclear_disk.sh: line 1302: /boot/preclear_reports/preclear_report_WD-WCC1T1173480_2016.08.09-19:21:37.txt: Invalid argument
root@Tower:/usr/local/emhttp#

So it seems that with the array running, I can run only a single pre-clear with the Beta. With the array stopped, I can run a second pre-clear, but no more than 2 simultaneously. With the original Joe L. script and Screen, I've run at least 4 pre-clears simultaneously on this server, though that was on 5.x, not 6.x. Paul
  12. gfjardim, First, thanks very much for creating this plug-in. I've been wishing for a GUI front-end for years. Second, I just got to test it for the first time, and have a problem using the Beta. I'm running 6.1.9, and I am attempting to pre-clear 2 drives at the same time. I am only able to pre-clear one drive at a time. As soon as I start the second drive, the progress on the first drive stops, and I see this message (twice) in the 'Preview' icon output: When I attempt to restart the pre-clear on the first drive, the progress on the second drive stops. This has all been occurring during the pre-read step, as I've never let it go long enough to begin the pre-clearing step before starting the next drive. Diagnostics are attached. The preclear.disk.beta.log was surprisingly short: One last thought: I just read through 51 pages in this thread, only a portion of which is applicable to the Beta, and at times it is confusing which plug-in is being discussed. Perhaps it is time the Beta gets a dedicated thread? It would have also been incredibly helpful if you maintained the first post, including instructions on how to report issues, since that is the first thing a user reads when following the link from Community Applications. I actually thought I had followed the wrong link since the first post doesn't mention anything about the separate Beta. Thanks, Paul tower-diagnostics-20160809-1824.zip
  13. Hey everyone, Kizer noticed I hadn't been around in a while, so he sent me a PM and asked me to check in. I just got through reading several months of posts, and had a few thoughts to share. First, sorry, I've been really busy in work and life, and have completely neglected this script. It has two main issues right now: 1) a file path has changed under v6 (maybe that was 6.1) that breaks the script, and 2) there are changes in v6 that don't align with the testing this script performs. As some have pointed out, you can fix the file path change either by creating a symbolic link or by a simple text replacement in the script. I would update it, but releasing a new version might represent to some a level of compatibility with 6.x, which is not the case. The script (with the path fix) may work for you on 6.x and even produce good results, but it definitely has not been updated for 6.x. Some additional thoughts: These tunables are all about optimizing your HD controller(s). While the number and size of installed drives has an impact on parity check speeds, I do not believe these tunables compensate for that. Rather, each unique configuration of HD controllers on each system may require different amounts of memory to work efficiently. Tom and LimeTech have chosen default values that are both frugal and often pretty fast. Certain controllers, like the one on my system (see my sig), work horribly with the default settings (regardless of drives attached), and I experience a massive performance boost by throwing more memory at the problem (increasing the values). In other words, the default values might be choking performance by not allocating enough memory for your particular HD controller(s). Each test starts from scratch, and does not look at your current settings other than to inform you how much additional memory the Unthrottled setting would use compared to the current settings. I agree, it would be a good idea to print the current settings as part of the test results. There are no guarantees that the reported settings will ultimately perform as tested. The test simply sets the values, starts a parity check, lets it run for a bit to see how fast it is, stops it, then repeats with the next set of values (roughly the loop sketched at the end of this post). It's certainly possible that the process of running multiple partial parity checks might falsely influence the reported speeds - I believe this is evidenced by a couple of users who noted that a reboot after setting the new values resulted in horrible performance. These users did the right thing and went back to default values. Not every system has a problem with the default values. That's why I created this script, to try and easily figure out what values might work on any system. Sometimes default is fastest, and sometimes changing the values has no impact; congratulate yourself, you have a robust system. I wish my system worked well with default settings, but I have to increase the memory to get decent performance. The Best Bang for the Buck is basically a "fast enough" setting with low memory utilization. Yes, you might be able to go faster, but you have to throw increasing amounts of memory at it to get there for very little return. I recommend the Best Bang result in almost all cases. As some have pointed out, the script doesn't always get the fastest result at the end of the final pass. Part of this is because of inconsistency between the multiple runs at the exact same setting.
The first pass might find test #7 was fastest, but when it zeroes in on that range in the second pass the server may decide to behave differently and give slower results. While I could try and make the script smarter, and ever more complex, I recommend that instead of expecting the script to do everything for you, you look at the results yourself and make a judgement call. That's the main reason I print all the results of every test, because the logic to find the fastest and best bang settings is not foolproof. Also, there are manual settings, so if you feel a range of values is worth examining in closer detail, run it in manual mode and target a specific range. If you're ever not sure what settings to use, go with the smallest values that give acceptable results. I've done zero testing on 6.x with this script (sorry, but it's true). I also don't understand the impact of the changes to the tunables settings in 6.x. Because this script can test some extreme memory settings, I always recommend not doing anything that might cause data loss while testing. Sure, run your VM's and what not, to load up the system per normal, but don't be reading/writing to the array while testing, because that may both throw off the results and also expose you to data loss if the memory settings somehow run the server out of memory and cause issues. If you have VM's that are reading/writing and you can't somehow stop that activity, then yes, do shut down that VM before testing. Oh, and not to forget Kizer, who called me in: Sorry, I don't know why you got a zero for the results, especially on a 5.0 system. There must be something about your system that is breaking the script. As long as your parity checks complete in a normal amount of time, this is purely a script issue. If your parity check is really slow, you could try increasing these values yourself, manually in the GUI, and run a parity check to test. You can try using the same values the script tests, but since you would be doing it manually, I would compare the default values to maybe something like tests #7, 14 and 20. If you don't see any worthwhile increase in speed, just stick with your defaults, and if you see a worthwhile increase, perhaps run some more manual tests around the good values. Lastly, I'll apologize again, as I'm still busy with work and life, and don't foresee being able to work on this script in the near future.
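For the curious, the heart of each test is roughly the loop below. This is a simplified illustration, not the script's actual code, and it assumes mdcmd still accepts "set", "check NOCORRECT", "nocheck" and "status" the way it did when I wrote the script - the mdResyncPos field name and its units may well differ on your version, so treat the numbers as relative, not absolute:

# One test iteration, simplified illustration only
mdcmd set md_num_stripes 1408
mdcmd set md_sync_window 1280

mdcmd check NOCORRECT                 # start a non-correcting parity check
sleep 600                             # let it run for a while
pos=$(mdcmd status | grep mdResyncPos | cut -d= -f2)
mdcmd nocheck                         # stop the check

# Bigger pos after the same sleep = faster setting; units depend on the version
echo "md_sync_window=1280 reached position $pos in 600 seconds"

Paul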
  14. Hey Hellboy, I'm not sure, but probably it is safe. Primarily the tester is just running a parity check from the command line, so that should be safe regardless of what version is running. The memory parameters that are being adjusted between runs simply affect how much memory is allocated to the process, and should be okay to tweak unless Tom has made changes to these tunables. In fact, with v6 being 64 bit, you may be able to push these numbers much higher than ever before. Worst case, you're still just running a non-correcting parity check, so if something happens your data shouldn't be modified. All that said, let me be clear, I am still on v5. I only have my one production box, and I don't run beta software on it. I have not tested this tool on v6.anything, though I think other users in here have done so. And since you are asking if this is safe to run on 6b10a, my gut is telling me you are running beta on a production box with data you actually care about, something I don't advise, and in that case I really wouldn't advise running this tool either. If you end up running it, be sure to share your results! -Paul
  15. Interesting! I had no idea that this had been changed. I'm not sure of the impact. I haven't played with the 6.0 beta yet, and never will as all I have is a production server. I might do a 6.0 release candidate, but will probably wait until final. I actually have a newer version of the script with new features that I never finished and released. I got very busy on a major project and had to disappear for a while. Everybody wants higher MB/s, no doubt. I honestly don't think you'd ever notice the difference between test 2 and test 21 in day to day usage, as that's only about 1% difference in speed, and not worth the extra memory usage. That's crazy!! I have no idea what happened here. Hope you got it all figured out. You're welcome! Glad you found it of use. I've been gone a long while, and figured this script had fallen into the dark bowels of the unRAID forums, and was surprised to stumble onto it on the first page. Couldn't resist giving a shout out. -Paul
  16. Here are my results from reading a 7.1 GB DVD ISO while running a Parity Check. No matter what values I used for md_num_stripes, performance was not altered by that value. It appears that unRAID 5.0 is prioritizing the Parity Sync over Read traffic. I found that reporting comparable results was also problematic, as the further the Parity Check progressed, the more my Read speeds dropped, most likely because the HD had to seek further and further to find the data to Read as the Parity Sync position got further away from the Read data. You can see this in test results 1-7 below. To 'level set' the results, I did two Test 1's, both at the same Parity Sync position, one with md_num_stripes a measly 4 stripes larger than md_sync_window, and a second with md_num_stripes 3x the value of md_sync_window. There was no read speed improvement.

Test 1 - md_num_stripes=2820 - (7.1 GB) took 2896.03s averaging 2.5 MB/s <-- Num Stripes only 4 bigger than Sync Window
Test 1 - md_num_stripes=8320 - (7.1 GB) took 2869.10s averaging 2.5 MB/s <-- Num Stripes set to 3x the Sync Window

Parity Check Speed was barely impacted by running the Read tests simultaneously. My full Parity Check time increased less than 5 minutes, from 7h35m to 7h39m. In these tests, you can see the gradual slowdown of the Read results as the Parity Check progressed, and that the Parity Check finished during Test 8:

Test | num_stripes | write_limit | sync_window | Speed
------------------------------------------------------------
   1 |        2820 |         128 |        2816 | 2.5 MB/s
   2 |        2948 |         128 |        2816 | 2.3 MB/s
   3 |        3076 |         128 |        2816 | 1.9 MB/s
   4 |        3204 |         128 |        2816 | 1.9 MB/s
   5 |        3332 |         128 |        2816 | 1.7 MB/s
   6 |        3460 |         128 |        2816 | 1.6 MB/s
   7 |        3588 |         128 |        2816 | 1.6 MB/s
   8 |        3716 |         128 |        2816 | 24.7 MB/s
   9 |        3844 |         128 |        2816 | 129 MB/s
  10 |        3972 |         128 |        2816 | 128 MB/s

Here's Test 1 again (from the same Parity Check position as the Test 1 above), but with the higher md_num_stripes:

Test | num_stripes | write_limit | sync_window | Speed
------------------------------------------------------------
   1 |        8320 |         128 |        2816 | 2.5 MB/s

The read-time of the DVD during a Parity Check was anywhere from 48 minutes (should be watchable) at the beginning of my Parity Check to 75 minutes (would probably have stuttering) at the end of my Parity Check. I don't think a Blu-Ray would have been watchable at all. Most likely the movie was located at the beginning of the drive, and the results might have been different otherwise. Any questions, thoughts or suggestions? I might run another test with lower sync_window values, to see if I can artificially prioritize Reads by starving the Parity Sync. I'm planning on releasing v3.0 of this utility with the Read test included, so others can see if it has any impact on their system, but personally I don't see much value in this test. -Paul
  17. The result surprises me because I've set md_num_stripes to a very low value, purposely trying to starve the reads, and even a value of 8 (yes, eight) didn't starve it. A few users in this thread have made a point of the importance of increasing md_num_stripes to very large values above sync+writes, and here I am playing with tiny values and seeing no impact whatsoever. I'm just trying to make sure the read test is working correctly before I get crazy complex by running multiple tests simultaneously. I wasn't trusting the results, so I asked for input.
  18. Tuning md_write_limit has had a huge effect on writes - as you expected. Tuning md_num_stripes has had zero effect on reads, at least on my server. Several users have commented about how they improved read performance by tuning num_stripes. I expected to see something, especially with values as low as 128, 10% of the unRAID stock value. I tried writing a 10GB file of random data (instead of zero data) to use for the test, but hours later the file still wasn't complete. Now I'm reading a 7.1GB DVD ISO, and I'm getting a constant 128 MB/s regardless of md_num_stripes. I know not everyone's server has shown a response to tuning md_sync_window, perhaps my server just doesn't show a response to md_num_stripes. I guess we'll find out when I release v3.0.
  19. Am I doing something wrong in my Read tests? I'm asking because all of my results, from md_num_stripes=128 to md_num_stripes=2816, had the same read speed (as calculated by dd). I tested with a 5GB file (my server has 4GB of RAM). The file was all zeros (this was a file I had created with the write test). Here's the command I issued for the test:

dd if=/mnt/disk$TestDisk/testfile5gbzeros.txt bs=64k of=/dev/null

I also drop the caches:

sync && echo 3 > /proc/sys/vm/drop_caches

I set the write_limit and sync_window both to 128 for the whole read test - I don't think these values affect read performance, so I push them way down and out of the way (probably not necessary, but figured someone would ask about it). Here are my results:

Test | num_stripes | write_limit | sync_window | Speed
------------------------------------------------------------
   1 |         128 |         128 |         128 | 156 MB/s
   2 |         256 |         128 |         128 | 155 MB/s
   3 |         384 |         128 |         128 | 156 MB/s
   4 |         512 |         128 |         128 | 156 MB/s
   5 |         640 |         128 |         128 | 156 MB/s
   6 |         768 |         128 |         128 | 156 MB/s
   7 |         896 |         128 |         128 | 156 MB/s
   8 |        1024 |         128 |         128 | 156 MB/s
   9 |        1152 |         128 |         128 | 156 MB/s
  10 |        1280 |         128 |         128 | 156 MB/s
  11 |        1408 |         128 |         128 | 155 MB/s
  12 |        1536 |         128 |         128 | 156 MB/s
  13 |        1664 |         128 |         128 | 155 MB/s
  14 |        1792 |         128 |         128 | 156 MB/s
  15 |        1920 |         128 |         128 | 156 MB/s
  16 |        2048 |         128 |         128 | 156 MB/s
  17 |        2176 |         128 |         128 | 156 MB/s
  18 |        2304 |         128 |         128 | 156 MB/s
  19 |        2432 |         128 |         128 | 155 MB/s
  20 |        2560 |         128 |         128 | 156 MB/s
  21 |        2688 |         128 |         128 | 156 MB/s
  22 |        2816 |         128 |         128 | 156 MB/s

I'm planning on testing with both a random file and a real DVD ISO file, but the results above caught me off guard, so I thought I would ask the experts in case I had completely missed a vital step or formed my dd test wrong.
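For reference, each read test iteration boils down to this (simplified from what the script actually runs; it assumes $TestDisk and $num_stripes are set by the surrounding loop, and that "mdcmd set" behaves the way it does on my 5.0 box):

# One read-test iteration, simplified
mdcmd set md_num_stripes $num_stripes
mdcmd set md_write_limit 128
mdcmd set md_sync_window 128

sync && echo 3 > /proc/sys/vm/drop_caches       # flush the page cache first

# dd reports its throughput on stderr, so capture that and keep the last line
dd if=/mnt/disk$TestDisk/testfile5gbzeros.txt bs=64k of=/dev/null 2>&1 | tail -1

Thanks, Paul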
  20. You're getting carried away trying to make a tool that will do EVERYTHING plus make you some coffee. Focus on the core functionality, and let the users change their default settings if they like. Ever seen the movie Idiocracy? Remember the receptionist's computer in the hospital? That movie invented the Android OS years ahead of its time! Let the users make the change they want. Put in the documentation where to make the change in the gui or file. Later on, after there are many tests and if you feel like revisiting, you can update the file automatically with some option switch. Or via some webgui presentation test. I would not do it at this point. Example: I want to run the benchmark without touching my own customized values. What I have works very well for me, with no pauses and high burst write speed (which is very important to me). Perhaps neither of you has tried any of the recent versions. For a while, there has been a 'SAVE' option that presents the user with the option to write the values to the disk.cfg file. If the user wants the values written to the file, they simply type SAVE and it's done. The user gets to choose between the fastest and best bang values. If they don't want the new values, they simply don't type SAVE, and their original values are preserved. My question wasn't should I do this or not, I've been doing it for a while, and will continue to do it. This is the type of program I write. My question was simply whether I should maintain the values in disk.cfg, or extra.cfg.
  21. I assume you were looking for the default values, before you start changing things. Remember, those values can be superseded by: /boot/config/extra.cfg as per: http://lime-technology.com/forum/index.php?topic=4625.msg42091#msg42091 My thoughts on this were that when Tom originally rolled out these tunables, the only way to adjust them was in extra.cfg, and that at some point later he made the adjustment a core part of disk.cfg, and tunable through the GUI. But you make a good point, some may still be using extra.cfg to override the values. When I allow the changes to be saved, should I be only writing to extra.cfg, or is it okay to make the changes in disk.cfg? Since the GUI changes disk.cfg, I was following that model.
  22. Wow. Simply Wow. Patilan and WeeboTech, thank you both for the solutions.
  23. Is there a way in the system I can query the size of the structure stripe_head? I'm not sure if you simply calculated that value or retrieved it from somewhere. If retrievable, I would like to avoid hard coding the value in the utility.
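The only place I know to even look myself is the kernel's slab statistics, and I honestly don't know whether unRAID's md driver registers its stripe heads there under a recognizable name, so treat this as a guess rather than an answer:

# Guess: if stripe_head allocations show up as a named slab cache,
# column 4 of /proc/slabinfo is the per-object size in bytes
grep -i stripe /proc/slabinfo | awk '{print $1, $4 " bytes/object"}'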
  24. Hey Patilan (or anyone else interested), I have an easy challenge for you. The following doesn't work correctly, as there is a carriage return in the file that survives the evaluation:

eval $(cat /boot/config/disk.cfg | egrep "md_sync_window")

To demonstrate, if I do this:

echo "md_sync_window=x"$md_sync_window"x"

I get this:

_sync_window=x2816

Instead of this:

md_sync_window=x2816x

Which is why I did this, which works but is rather convoluted:

md_sync_window=`cat /boot/config/disk.cfg | egrep "md_sync_window" | sed 's|.*"\([0-9]*\)".*|\1|'`

Any easy way to do an eval that also strips off the carriage return?
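I'm wondering if just stripping the CR in the pipeline before the eval would count as "easy" - something like this (untested beyond a quick look, and assuming the only stray character really is the trailing carriage return from the DOS-style line endings on the flash drive):

eval $(egrep "md_sync_window" /boot/config/disk.cfg | tr -d '\r')

Or is there a cleaner way? -Paul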