zoggy

Members
  • Posts: 700
  • Joined
  • Last visited
  • Days Won: 1

Everything posted by zoggy

  1. Really, you should have a branch for whatever code you want to work on. I'm assuming you're going to have a webgui for 5.x and one for 6.x? You can do this two different ways.
     One repo for each trunk of code, so webgui-5 and webgui-6. That keeps them separate but does require double the work, and if you're not really going to be making changes to the gui for 5.x (I'd assume most effort will be directed towards 6.x), it may be overkill.
     Or you could keep one repo as you currently do, with the 'master' branch holding the latest 5.x code. You make a new branch based off 'master' where you can modify things to your heart's content for 6.x, and when you're ready you merge it down into master. The key is that you never commit to master directly, since other branches are based off it; you commit to the 5.x or 6.x branch and then pull it down into master. If you want to maintain two separate lines of code, say 5.x in case you want to push out updates for it later, you just keep that in its own branch. A rough sketch of the commands is below.
     Kinda like what we do for SABnzbd: https://github.com/sabnzbd/sabnzbd/branches - 'master' is the latest code for the current trunk, 0.7.x, and develop has 0.8.x. When we need to make a new update to 0.7.x we put the new code there and then push it down to master. This keeps 'master' more like a stable branch, lets us test code, and keeps code divergence to a minimum. http://nvie.com/posts/a-successful-git-branching-model/
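     As a rough sketch (the branch names here are just placeholders; adapt them to whatever you actually call yours), the day-to-day commands could look something like this:

         # create the 6.x development branch off the current stable code
         git checkout master
         git checkout -b 6.x
         git push -u origin 6.x

         # hack away on 6.x and push to its own branch
         git commit -am "webgui changes for 6.x"
         git push

         # when a 5.x fix is needed, commit it on its maintenance branch,
         # then fold it down into master so master stays the stable line
         git checkout 5.x
         git commit -am "webgui fix for 5.x"
         git checkout master
         git merge 5.x
         git push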
  2. Could just buy 2x2TB or a 3TB drive and get rid of most of those drives to cut down the heat and power... but anyway, which Noctua are you going to try now? The NF-R8?
  3. Hehe.. just for giggles I ran that command on /mnt/user/:
     161915  Music
      96022  Personal
      10391  TV
       7107  Movies
       2017  Media
  4. Personally, I've given up interest in this.. besides the apparent lack of support on GitHub (only one gatekeeper), it's just too cumbersome to test/dev with the current method/setup.
  5. Quoting Pauven:
     "Correct, I don't compare pass 2 results to pass 1, since they run for different lengths of time. Pass 2 results are expected to be slightly lower, as the test extends farther into the parity check, which gradually slows from beginning to end. It is expected that pass 1 gets you into the right range, and pass 2 finds the best value in that range. If I compared pass 2 to pass 1, the majority of the time the logic would probably pick the pass 1 result, since it would probably have the faster time due to the shorter test length.
     Unfortunately, some servers are not testing well (he's not slow, he just doesn't test well...). It's not so much about comparing pass 1 to pass 2; rather, it's about servers producing inconsistent results, which no amount of logic can power through. Myself, I'm looking at RockDawg's results and have no freaking idea which values are good values... My point exactly: inconsistent results. Test 36 should have been pretty close to, but slightly below, test 1.
     Actually, if you look at the bigger picture (and RockDawg's server is not the first to show this behavior), the 1st test started off at a nice speed, and each subsequent test gets slower, until a lower threshold is reached, and then all results hover around that lower threshold. I would hazard a guess that these md_* values are not actually affecting anything on RockDawg's server - he could run a 512-byte test 10 times in a row, and each subsequent test would be a little bit slower. I think a VM is highly suspect. zoggy had nearly identical behavior in his test results:

     Tunables Report from unRAID Tunables Tester v2.0 by Pauven

     Test | num_stripes | write_limit | sync_window |   Speed
     --- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration) ---
        1 |        1408 |         768 |         512 |  88.0 MB/s
        2 |        1536 |         768 |         640 |  87.8 MB/s
        3 |        1664 |         768 |         768 |  87.4 MB/s
        4 |        1920 |         896 |         896 |  87.0 MB/s
        5 |        2176 |        1024 |        1024 |  87.2 MB/s
        6 |        2560 |        1152 |        1152 |  86.8 MB/s
        7 |        2816 |        1280 |        1280 |  86.6 MB/s
        8 |        3072 |        1408 |        1408 |  86.2 MB/s
        9 |        3328 |        1536 |        1536 |  86.0 MB/s
       10 |        3584 |        1664 |        1664 |  85.7 MB/s
       11 |        3968 |        1792 |        1792 |  85.7 MB/s
       12 |        4224 |        1920 |        1920 |  86.1 MB/s
       13 |        4480 |        2048 |        2048 |  86.2 MB/s
       14 |        4736 |        2176 |        2176 |  85.7 MB/s
       15 |        5120 |        2304 |        2304 |  85.3 MB/s
       16 |        5376 |        2432 |        2432 |  85.3 MB/s
       17 |        5632 |        2560 |        2560 |  85.1 MB/s
       18 |        5888 |        2688 |        2688 |  85.1 MB/s
       19 |        6144 |        2816 |        2816 |  84.8 MB/s
       20 |        6528 |        2944 |        2944 |  84.8 MB/s
     --- Targeting Fastest Result of md_sync_window 512 bytes for Medium Pass ---
     --- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration) ---
       21 |        1288 |         768 |         392 |  84.9 MB/s
       22 |        1296 |         768 |         400 |  84.8 MB/s
       23 |        1304 |         768 |         408 |  84.7 MB/s
       24 |        1312 |         768 |         416 |  84.7 MB/s
       25 |        1320 |         768 |         424 |  84.4 MB/s
       26 |        1328 |         768 |         432 |  84.7 MB/s
       27 |        1336 |         768 |         440 |  84.7 MB/s
       28 |        1344 |         768 |         448 |  84.4 MB/s
       29 |        1360 |         768 |         456 |  84.6 MB/s
       30 |        1368 |         768 |         464 |  84.7 MB/s
       31 |        1376 |         768 |         472 |  84.3 MB/s
       32 |        1384 |         768 |         480 |  84.5 MB/s
       33 |        1392 |         768 |         488 |  84.7 MB/s
       34 |        1400 |         768 |         496 |  84.6 MB/s
       35 |        1408 |         768 |         504 |  84.5 MB/s
       36 |        1416 |         768 |         512 |  84.7 MB/s

     Notice that the 512-byte test is both the fastest and one of the slowest! I talked to zoggy the other day about his build, and I don't think he mentioned a VM, but I didn't think to ask either. -Paul"
     Nope, no VM here.
  6. Most people run a weekly or monthly parity check, so anything that cuts that multi-hour event down is a welcome thing... assuming it doesn't greatly affect day-to-day performance. Speaking of which, I would assume the md_* variables don't really affect the cache drive, so you could always recommend tuning for parity and then relying on the cache drive for day-to-day writes?
  7. Just started a full auto test.. it seems like you could use some intelligence here to spot the trend and exit out early rather than waste the time watching the speeds diminish (rough sketch of the idea below). With PASS 1 at 3min duration and 20 points (= 1hr), I could have saved 45 mins (if stopping after the 5th test) and just gone on to the next part.
     Note that I did not have the stock tunables set.. do you ever record what the user had initially? I made a copy of my unRAID drive before doing this test just in case. Here were my values before running this test (boot/config/disk.cfg).
     Grabbed the output from TunablesReport.txt; you can see that the 2nd pass appears to be going the same way.

     Tunables Report from unRAID Tunables Tester v2.0 by Pauven

     NOTE: Use the smallest set of values that produce good results. Larger values
     increase server memory use, and may cause stability issues with unRAID,
     especially if you have any add-ons or plug-ins installed.

     Test | num_stripes | write_limit | sync_window |   Speed
     --- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration) ---
        1 |        1408 |         768 |         512 |  88.0 MB/s
        2 |        1536 |         768 |         640 |  87.8 MB/s
        3 |        1664 |         768 |         768 |  87.4 MB/s
        4 |        1920 |         896 |         896 |  87.0 MB/s
        5 |        2176 |        1024 |        1024 |  87.2 MB/s
        6 |        2560 |        1152 |        1152 |  86.8 MB/s
        7 |        2816 |        1280 |        1280 |  86.6 MB/s
        8 |        3072 |        1408 |        1408 |  86.2 MB/s
        9 |        3328 |        1536 |        1536 |  86.0 MB/s
       10 |        3584 |        1664 |        1664 |  85.7 MB/s
       11 |        3968 |        1792 |        1792 |  85.7 MB/s
       12 |        4224 |        1920 |        1920 |  86.1 MB/s
       13 |        4480 |        2048 |        2048 |  86.2 MB/s
       14 |        4736 |        2176 |        2176 |  85.7 MB/s
       15 |        5120 |        2304 |        2304 |  85.3 MB/s
       16 |        5376 |        2432 |        2432 |  85.3 MB/s
       17 |        5632 |        2560 |        2560 |  85.1 MB/s
       18 |        5888 |        2688 |        2688 |  85.1 MB/s
       19 |        6144 |        2816 |        2816 |  84.8 MB/s
       20 |        6528 |        2944 |        2944 |  84.8 MB/s
     --- Targeting Fastest Result of md_sync_window 512 bytes for Medium Pass ---
     --- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration) ---
       21 |        1288 |         768 |         392 |  84.9 MB/s
       22 |        1296 |         768 |         400 |  84.8 MB/s
       23 |        1304 |         768 |         408 |  84.7 MB/s
       24 |        1312 |         768 |         416 |  84.7 MB/s
       25 |        1320 |         768 |         424 |  84.4 MB/s
       26 |        1328 |         768 |         432 |  84.7 MB/s
       27 |        1336 |         768 |         440 |  84.7 MB/s
       28 |        1344 |         768 |         448 |  84.4 MB/s
       29 |        1360 |         768 |         456 |  84.6 MB/s
       30 |        1368 |         768 |         464 |  84.7 MB/s
       31 |        1376 |         768 |         472 |  84.3 MB/s
       32 |        1384 |         768 |         480 |  84.5 MB/s
       33 |        1392 |         768 |         488 |  84.7 MB/s
       34 |        1400 |         768 |         496 |  84.6 MB/s
       35 |        1408 |         768 |         504 |  84.5 MB/s
       36 |        1416 |         768 |         512 |  84.7 MB/s

     Completed: 2 Hrs 8 Min 16 Sec.

     Best Bang for the Buck: Test 1 with a speed of 88.0 MB/s
          Tunable (md_num_stripes): 1408
          Tunable (md_write_limit): 768
          Tunable (md_sync_window): 512
     These settings will consume 77MB of RAM on your hardware.

     Unthrottled values for your server came from Test 21 with a speed of 84.9 MB/s
          Tunable (md_num_stripes): 1288
          Tunable (md_write_limit): 768
          Tunable (md_sync_window): 392
     These settings will consume 70MB of RAM on your hardware.
     This is -70MB less than your current utilization of 140MB.
     NOTE: Adding additional drives will increase memory consumption.

     In unRAID, go to Settings > Disk Settings to set your chosen parameter values.

     Pauven, if you wanna chat you can jump on the unRAID IRC channel: #unraid on irc.freenode.net. If you don't have an IRC client, you can connect via the web: http://webchat.freenode.net/
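     To illustrate the early-exit idea (this is not part of Pauven's script; run_sample is a placeholder for the timed test and the three-strikes threshold is an arbitrary assumption), the pass loop could bail once the speed has dropped for a few samples in a row:

         # hypothetical early-exit check: stop the pass once the measured speed
         # has fallen for 3 consecutive sample points (threshold is arbitrary)
         best_speed=0
         declines=0
         for window in 512 640 768 896 1024; do   # sample points, for illustration only
             speed=$(run_sample "$window")        # placeholder for the actual timed test
             if (( $(echo "$speed > $best_speed" | bc -l) )); then
                 best_speed=$speed
                 declines=0
             else
                 declines=$((declines + 1))
             fi
             if [ "$declines" -ge 3 ]; then
                 echo "Speeds trending down, skipping remaining sample points"
                 break
             fi
         done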
  8. FYI, in 2.0 the full auto mode states at the command prompt that it's going to take 1.5x the length of a full parity check, but if you continue, the next screen says it's going to take 2.1 hours. Also, shouldn't a user run a non-correcting parity check BEFORE any test is done, to make sure everything is good before continuing? (Example of kicking one off from the console below.)
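     If I remember right (from memory, so verify against your unRAID version before relying on it), a non-correcting check can be started from the console with mdcmd, something like:

         # start a parity check that reports sync errors but does not write corrections
         /root/mdcmd check NOCORRECT
         # progress can then be watched from the webgui's Main page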
  9. Have you looked into "sysctl vm.highmem_is_dirtyable=1"? I know that if I enable it I do see higher speeds with the stock nn values. Read this post about it: http://lime-technology.com/forum/index.php?topic=25431.msg240538#msg240538 - I wonder how that setting would interact with things once they are tuned with your script? (Quick example of checking/toggling it below.)
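     For reference, checking and flipping that sysctl at runtime looks like this (it only exists on 32-bit kernels with highmem, and it resets on reboot unless you re-apply it yourself, e.g. from the go file - that persistence step is just a suggestion, not something the script does):

         # show the current value (0 = highmem pages are not counted as dirtyable)
         sysctl vm.highmem_is_dirtyable
         # enable it for the current boot
         sysctl vm.highmem_is_dirtyable=1
         # equivalent via procfs
         echo 1 > /proc/sys/vm/highmem_is_dirtyable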
  10. Can this script check how much memory is free and back out of what it's doing if memory comes dangerously close to running out? (Something along the lines of the sketch below.)
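      Just to illustrate the kind of guard I mean (the 100MB floor and the restore step are arbitrary assumptions on my part, not anything the script currently does):

          # bail out if free memory drops below an arbitrary 100MB floor
          free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
          if [ "$free_kb" -lt 102400 ]; then
              echo "Free memory below 100MB - restoring previous tunables and exiting"
              exit 1
          fi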
  11. That's a little open-ended.. if the installed plugin isn't doing anything, then it should be fine. If the plugin is going to start generating CPU cycles or writing data, then yes, you probably don't want it running, as it could skew the results. So, to limit the outside variables, you should probably just run this in safe mode and with the array otherwise idle. Pauven can probably answer best or confirm my statement.
  12. Doubtful... I don't see unmenu's solution going away anytime soon. And the newcomer boxcar has potential...
  13. You were supposed to remove the old file (since naming the revision in the filename was dropped). The new filename never changes, so just wget'ing the latest file always means you're up to date. (Hypothetical example below.)
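      For illustration only (the URL and filename here are made up; the real ones are whatever the author publishes), updating then boils down to re-fetching the same stable filename over the old copy:

          # hypothetical stable URL - overwriting the old copy is the whole update
          wget -O /boot/config/plugins/example.plg http://example.com/example.plg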
  14. "Hi Tom, should you add code to block access to IE7 and below? For example, a window that says 'IE 8.0 or greater', etc..." -- Sideband Samurai
      IE users can just use Chrome Frame; you don't even need admin rights. Sadly, though, it's being retired in January 2014. Anyway, rather than forcing that as a short-term solution, we might be better off just showing a message to the IE7-and-older crowd:

          <!--[if lte IE 7]>
            <p class="browsehappy">You are using an <strong>outdated</strong> browser. Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</p>
          <![endif]-->
  15. Just FYI, Tom is alive and is working on webGui: https://github.com/limetech/webGui/pull/1#issuecomment-22980697
  16. It deletes it when it's installed. Thus, if you just remove the package from the plugins folder, on reboot the normal webgui will be there (it won't be removed, since the new webgui package isn't being installed).
  17. So it seems like this whole webgui dev isn't taking off like one would hope.. maybe post-5.0?
  18. Note for ant: when testing responsive design you can use http://responsive.is/ - alright, I'm off to work. I'll check back on this thread tomorrow.
  19. The new UI in IE6 or 7: somewhat usable, but there are definitely some usability issues (lack of PNG support, float issues, unsupported CSS in use). Not that we should really support them, since WinXP users can run IE8 and Win7 users would be running IE9-10. But just in case you were curious what it looks like:
  20. @husky:/boot/config/plugins/webGui# cat webGui.* | grep devices
      devices="0"
      devices="0"
      However, it's still listing unassigned devices...
  21. Most mobile browsers nowadays are more than capable of displaying a webpage correctly, and there doesn't appear to be anything mobile-unfriendly in use (like dropdown menus on hover), so your mobile device should be fine. You may just have to zoom in to get a decent touch target... but yes, one could easily add CSS3 media queries to make the design responsive to the device being used (tablet/mobile/desktop). This could have been done with the previous stock UI as well.
  22. In case anyone was wondering: I have a VM where I run the legacy browsers for testing (IE6 & 7 via IE Collection), but natively it has IE8, Firefox 3.6.28, Opera 10.54, Safari 4.0.5, and Chrome 5. Tom, any chance of getting the forum's max total attachment size raised? Having a max file size of 192KB and a max total size of 192KB means that to attach screenshots of before + after changes, I'd have to use heavy compression or multiple posts. For giggles, here is what IE9 shows for stock r3 (notice it is also in compatibility mode):
  23. Just submitted the pull request: https://github.com/limetech/webGui/pull/1 - the changes are very subtle, if you notice any at all. Ideally, current-gen browsers won't see much change, since most of them agree on things, but the legacy browsers should all be normalized for the most part. This also fixes IE8's broken compatibility mode being used. Screenshot attached (notice the 'compatibility' icon next to the refresh button on stock r3, and that it's no longer there after my code changes).