unevent Posted January 20, 2013

Quoting the earlier reply: "Unevent: There are a lot of mentions of the word 'crap' in your post associated with plugins. I am sure the rest of us would not use both in the same sentence. If you cut all the CRAP out of your post then you are stating: 'make sure the amount of plugins you run is aligned with the hardware you run. The fact that a lot of plugins are there does not mean you need to run them all.'"

It was not my intention to associate crap with plugins or vice versa, only to express "crap-load of plugins" in the context of limited hardware resources, if that makes it sound better. The intention is to remove the crap-load of plugins from the RC10 memory issue at hand, as well as perhaps plant the seed of better plugin control for another thread.
WeeboTech Posted January 20, 2013

For further discussion: sysctl vm.highmem_is_dirtyable=1 seems to have a positive effect on write speed. http://lime-technology.com/forum/index.php?topic=25431.msg221288

EDIT: Let me add, it has a positive effect on write speed on a normal system with small files and lots of memory. See the results in that thread and test for yourself. It's worthy of a test for your own usage.
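For anyone wanting to try WeeboTech's suggestion, a minimal sketch of checking and setting the tunable from the console. Assumptions: this sysctl only exists on 32-bit kernels built with HIGHMEM support, and appending to /boot/config/go is one common unRAID way to make a setting survive reboots - adapt to your own setup.

```shell
# Check whether the tunable exists on this kernel (32-bit HIGHMEM kernels only)
sysctl vm.highmem_is_dirtyable 2>/dev/null || echo "not available on this kernel"

# Enable it for the current boot only (reverts on reboot; requires root)
sysctl vm.highmem_is_dirtyable=1

# One way to make it persistent on unRAID: append the command to the go script
echo 'sysctl vm.highmem_is_dirtyable=1' >> /boot/config/go
```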
Helmonder Posted January 20, 2013

The mem parameter at bootup works better. If I had not set it myself I would not have known there was an issue...
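For reference, a sketch of what the boot-time mem parameter Helmonder mentions might look like in syslinux.cfg on the unRAID flash drive. The label and file names shown are illustrative; your existing append line may carry other parameters that should be kept.

```text
# /boot/syslinux/syslinux.cfg -- limit the kernel to just under 4GB of RAM
label unRAID OS
  kernel /bzimage
  append mem=4095M initrd=/bzroot
```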
dwoods99 Posted January 20, 2013

Tom, I'm just catching up and would like to cast my vote if it isn't too late:

1) Freeze any feature development and resolve any open 'CRITICAL' bugs with the 5.0 RCs - a task which it sounds like may already be complete (performance issues on select hardware configurations are not critical bugs). As part of this, you might want to post a "Last Call for 5.0 RC10 'CRITICAL' Bug Reports" on the forum.

2) Release 5.0.0-x32-Final. Include in the release notes that this 32-bit version does not support memory configurations greater than 4GB, which have been noted to sometimes result in performance issues. Also list any hardware platforms that may have performance issues which may or may not be related to the amount of installed RAM. Users are certainly entitled to run unRAID on any hardware configuration they prefer, but should be informed that choosing to run certain problematic hardware or excessive memory limits their support. Of course, any currently 'Known Issues' should be documented in the release notes.

3) Provide immediate and ongoing support for any new 5.0.0-x32-Final issues that are rated 'HIGH' or 'CRITICAL', releasing 5.0.n-x32 versions as necessary.

4) Once support calms down to only 'MEDIUM' or lower issues, give unRAID a 4-week 64-bit evaluation window. With the clock ticking, create and release 5.1.0-x64-Alpha1 with no changes from 5.0.n-x32-Final other than those necessary to recompile under x64. This release is purely to test the waters and determine whether 64-bit is a currently viable path forward for unRAID. Release only to a selected alpha test group, and if it proves successful, proceed to a 5.1.0-x64-Beta1 general-population test release. During the 4-week window you may code bug fixes, as long as that effort does not extend beyond the window. At the end of the 4-week evaluation window, if x64 unRAID is not ready for a Release Candidate, revert to x32 development.
5) Regardless of the outcome of the x64 experiment, please share the results with the community.

6) Resume feature development on the chosen platform, either x32 or x64, but not both.

...

Having been off these forums for a while, and now trying to catch up... The above is the best post IMHO in regards to the v5 release. Like many users, I have no immediate need for 64-bit that can't wait 3-15 months to show up later. What's needed now are the v5 features which were not there in v4.7 (2TB+ HDD support, and reading connected ext3 and ntfs drives to copy files into the array). Putting out a stable/final v5 release with this allows people to still re-use older 32-bit computers but with newer technology for HDDs and Gigabit transfer, as these are more relevant than RAM and CPU for unRAID performance (IMO). If v5.0 users need to throttle their RAM to use cache-dirs, so be it. The next release can be a more stable v5.1 32-bit (final) release that can still be used for years down the road. Release v5.5 or v6.0 as the start of 64-bit and don't turn back after that... unless a critical/security bug is discovered in 32-bit and requires a v5.1b or v5.2 critical update. From everything I've read up to now, v5 RC10 will satisfy 80% of your current user base, 10% can find a work-around, 5% will never upgrade till later, and the others (sorry) can wait.
Joe L. Posted January 21, 2013

Quoting: "I would love to see the option for cache_dirs that allows one to change the cache_pressure easily instead of editing the file. The default may be too aggressive to use across the board when it is unknown how much crap people are running on low-memory and underpowered systems (addons, for example)."

That ability was added about 6 versions ago. You already have the ability to invoke it with different cache_pressure options. From the comments at the top of the program:

# Version 1.6
# - Changed vfs_cache_pressure setting to be 1 instead of 0 by default.
# - Added "-p cache_pressure" to allow experimentation with vfs_cache_pressure values

I guess I never noticed that I did not print that option letter in the "usage", and the alternate value did not continue to be in effect once cache_dirs suspended itself while waiting when you used the -w option. (In other words, it was undocumented, and a bug existed if you used the -p option together with the -w option.) I'll post a fixed version shortly.

Edit: Version 1.6.7 is now attached to the first post in the cache_dirs thread.

Joe L.
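Based on the version 1.6 comments Joe L. quotes, an illustrative invocation might look like the following. The flag semantics (-p for vfs_cache_pressure, -w to wait for the array to start) are taken from this thread, and the /boot path is an assumption about where the script was copied - check the usage output of your own copy.

```shell
# Run cache_dirs with a custom vfs_cache_pressure value instead of the default
# -w: wait for the array to come online before starting
# -p 10: set vfs_cache_pressure to 10 (per the -p option described above)
/boot/cache_dirs -w -p 10
```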
Berg Posted January 21, 2013

I just voted. As one of the users with the board in question who is suffering from slow write speeds, I look forward to a solution. In the meantime, the use of a cache drive has really helped mitigate the problem. I say release it, with a good article explaining the problem and some solutions. That said, it will increase complaints/support requests, as people don't read!
Frank1940 Posted January 21, 2013

Quoting Berg: "I say release with a good article explaining the problem and some solutions. That said, it will increase complaints/support requests as people don't read!"

amen, Amen, AMEN!!!!
Carpet3 Posted January 21, 2013

Release it already!
dave_m Posted January 21, 2013

Quoting: "what does 4.7 default to? 63 or 64? preclear the opposite of default then install a new 4.7 so it reverts to default again."

4.7 defaults to sector 63. During my tests I changed the default from 63 to 64 and back, booting up a new install of 4.7 each time, and it had no effect on the drives. At this point I am out of ways to easily recreate the issue simply using an unclean power down or switching from an existing 4.7 install to a new install. The closest I got to losing data was when I forced unRAID into looking like the disks had just been assigned in an initial configuration, so the user was prompted to start the array for the first time even though the assignments were wrong based on the previous array. Even then there was no MBR corruption.
Joe L. Posted January 21, 2013

Quoting dave_m: "At this point I am out of ways to easily recreate the issue simply using an unclean power down or switching from an existing 4.7 install to a new install."

Did you have any drives in the array set to a sector 64 start? (In other words, not equal to the default of stock unRAID.)

Joe L.
dave_m Posted January 21, 2013

Quoting Joe L.: "Did you have any drives in the array set to sector 64 start?"

Mixed, both 63 and 64.
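For anyone wanting to check which sector a partition starts on (the 63 vs 64 distinction being tested above), a self-contained sketch using sfdisk on a throwaway image file; on a live server you would point the same read-back command at the real device (e.g. /dev/sdX) instead. The sfdisk input syntax here is the modern util-linux dialect, not necessarily what shipped with unRAID-era Slackware.

```shell
# Make a 10 MB throwaway image and create one partition starting at sector 64
truncate -s 10M disk.img
echo 'start=64, type=83' | sfdisk disk.img

# Read the partition table back; the start= field shows where partition 1 begins
sfdisk -d disk.img | grep 'start='
```

On a real array drive, `sfdisk -d /dev/sdX` (read-only) shows the same `start=` field without modifying anything.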
pantner Posted January 21, 2013

Quoting Berg: "As one of the users with the board in question who is suffering from slow write speeds, I look forward to a solution."

Limiting your usable memory to 4095MB didn't work?
Berg Posted January 22, 2013

Quoting pantner: "limiting your usable memory to 4095MB didn't work?"

I haven't tried. I didn't know that was a solution (I am one of those users). How would I limit the usable memory?
pantner Posted January 22, 2013

Quoting Berg: "How would I limit the usable memory?"

There's lots of talk in this thread and in the RC10 thread. limetech even links to a thread discussing the problem in the first post of this thread: "The current state of the code is this. It seems to work fine for a 5.0 release. Some rough edges, some more functionality to add/change, but all-in-all I think it's good to go except for the 'slow write' issue being hashed out in this thread: http://lime-technology.com/forum/index.php?topic=22675.0"

Have a look around this post: http://lime-technology.com/forum/index.php?topic=25250.msg220983#msg220983
Berg Posted January 22, 2013

Thanks. I just had not had the time to sift thru all of this. I'll take a look at this discussion and see if it helps. Thank you.
RockDawg Posted January 22, 2013

While I have no opinion on the matter of determining the stable release, and I know everyone has different opinions on what is or should be prioritized, there is a pretty big (IMHO) bug/limitation that causes an interruption of data transmission during simple operations like browsing directories or copying files to unRAID. There are a few threads discussing the issue; here is one such thread: http://lime-technology.com/forum/index.php?topic=20313.0

Being that unRAID's primary focus is serving media, I feel this issue should get some serious attention, as it directly and adversely affects performance during such usage. I can't tell you how many times I experience an interruption while watching video or listening to music. I'm not going to say that I think this needs to be fixed before 5.0 Final, and I realize Tom probably has a lot on his plate as it is, but again, we are talking about basic functionality here. Just my $0.02.
itimpi Posted January 22, 2013

I used to get this until I started using the cache-dirs add-on, so I suspect it may well have something to do with unRAID spinning up drives to try and determine where to put a file. With cache-dirs active to avoid spinning up disks to read directories, and a cache disk for writing new files to, I no longer get these symptoms.
boof Posted January 22, 2013

I see this too, but I've put it down to the useless Windows client end (see also Explorer activity stopping when a drive spins up). Other clients accessing unRAID at the time don't see any stuttering or problem with streaming; it's only the single machine that's trying to do multiple things at once. So it's either a rubbish client side from Windows, or something in Samba for the single daemon handling requests from that client. I haven't tested against another non-unRAID Samba server, which would be a good test to rule out (or in) the unRAID-specific implementation.
skank Posted January 22, 2013

I wonder what decision Tom has taken now... He released a poll and the answer is clear... What did he decide?
S80_UK Posted January 22, 2013

Quoting itimpi: "I used to get this until I started using the cache-dirs add-on..."

Note also (in case some missed it) that Tom put some measures in place in RC9 in order to address some media playback glitch issues (although not specifically related to spin-up delays)... http://lime-technology.com/forum/index.php?topic=25184.msg218799#msg218799
RockDawg Posted January 22, 2013

I missed that and I'm still running RC8. I'll give that a try. Thanks.
S80_UK Posted January 22, 2013

And RC10 is current... http://lime-technology.com/forum/index.php?topic=25250.msg219413#msg219413
garycase Posted January 23, 2013

"... I wonder what decision Tom has taken now..." ==> A lot of us are probably wondering that.

I think the key decision point is simple: based on his knowledge of current issues, does Tom consider RC10 as stable as 4.7? If yes, then release. If no, then wait until the answer is yes.

The "stuttering" issues discussed in the threads linked above also exist in 4.7, although when I see them it's almost certainly due to drive spin-ups. It also sounds like cache_dirs mitigates it a lot.

An interesting side note: tomorrow (the 24th) is the 2-year anniversary of the release of 4.7.
wsume99 Posted January 23, 2013

Release 5.0 final and start working on a 64-bit upgrade, but limit it to just that. Don't add any other capability, because that will lengthen the development time; just convert to a 64-bit kernel.

FWIW - this is why there should have been a 4.8 release to add support for >2TB drives. That could have been released a long time ago. There was a poll way back where the users voted to include a whole host of new features rather than simply add >2TB support, yet reading through this thread, most people supported a 5.0 final release because we need to have >2TB support.
JustinChase Posted January 23, 2013

I'd like to see an "official" plugin system implemented sooner rather than later. It was 'promised' for v5, but I don't want to see 'final' held up for this. I think an x64 version would be great for those folks that use more memory, but it seems the plugin authors are holding off on making some 'final' changes until the official system is released. Having dependencies all managed by unRAID seems like it would benefit most everyone, amongst other things.