Your Chance to Chime In


limetech


Unevent: There are a lot of mentions of the word "crap" in your post associated with plugins. I am sure the rest of us would not use the two in the same sentence.

 

If you cut all the CRAP out of your post, then you are stating:

 

"Make sure the number of plugins you run is aligned with the hardware you run. The fact that a lot of plugins are there does not mean you need to run them all."

 

It was not my intention to associate crap with plugins or vice versa, only to express "crap-load of plugins" in the context of limited hardware resources, if that makes it sound better.  The intention is to separate the crap-load of plugins from the RC10 memory issue at hand, as well as perhaps to plant the seed of better plugin control for another thread.

Link to comment

Tom, I'm just catching up and would like to cast my vote if it isn't too late:

 

1)  Freeze any feature development and resolve any open 'CRITICAL' bugs with the 5.0 RCs - a task that sounds like it may already be complete (performance issues on select hardware configurations are not critical bugs).  As part of this, you might want to post a "Last Call for 5.0 RC10 'CRITICAL' Bug Reports" on the forum.

 

2)  Release 5.0.0-x32-Final.  Include in the release notes that this 32-bit version does not support memory configurations greater than 4GB, which have been noted to sometimes result in performance issues.  Also list any hardware platforms that may have performance issues which may or may not be related to the amount of installed RAM.  Users are certainly entitled to run unRAID on any hardware configuration they prefer, but should be informed that choosing to run certain problematic hardware or excessive memory limits their support.  Of course, any current 'Known Issues' should be documented in the release notes.

 

3) Provide immediate and ongoing support for any new 5.0.0-x32-Final issues that are rated 'HIGH' or 'CRITICAL', releasing 5.0.n-x32 versions as necessary.

 

4) Once support calms down to only 'MEDIUM' or lower issues, give unRAID a 4-week 64-bit evaluation window.  With the clock ticking, create and release 5.1.0-x64-Alpha1 with no changes from 5.0.n-x32-Final other than those necessary to recompile under x64.  This release is purely to test the waters and determine whether 64-bit is currently a viable path forward for unRAID.  Release only to a selected alpha test group, and if it proves successful, proceed to a 5.1.0-x64-Beta1 general-population test release.  During the 4-week window you may code bug fixes, as long as that effort does not extend beyond the window.  At the end of the evaluation window, if x64 unRAID is not ready for a Release Candidate, revert to x32 development.

 

5) Regardless of the outcome of the x64 experiment, please share the results with the community.

 

6) Resume feature development on the chosen platform, either x32 or x64, but not both.

...

 

Having been off these forums for a while, and now trying to catch up....

The above is, IMHO, the best post in regard to the v5 release.

Like many users, I have no immediate need for 64-bit; it can wait 3-15 months to show up later.

What's needed now are the v5 features that were not there in v4.7: 2TB+ HDD support and reading connected ext3 and NTFS drives to copy files into the array.

 

Putting out a stable/final v5 release with these allows people to keep re-using older 32-bit computers with newer HDD technology and Gigabit transfer, as these are more relevant than RAM and CPU for unRAID performance (IMO).

 

If v5.0 users need to throttle their RAM to use cache_dirs, so be it. The next release can be a more stable v5.1 32-bit (final) release that can still be used for years down the road.

 

Release v5.5 or v6.0 as the start of 64-bit and don't turn back after that... unless a critical/security bug is discovered in 32-bit that requires a v5.1b or v5.2 critical update.

 

From everything I've read up to now, v5 RC10 will satisfy 80% of your current user base, 10% can find a work-around, 5% will not upgrade until later, and the others (sorry) can wait.

 

Link to comment

I would love to see the option for cache_dirs that allows one to change the cache_pressure easily instead of editing the file.  The default may be too aggressive to use across the board when it is unknown how much crap people are running on low-memory and underpowered systems (add-ons, for example).

That ability was added about 6 versions ago (per the comments at the top of the program).  You already have the ability to invoke it with different cache_pressure options.

Version 1.6 

#              - Changed vfs_cache_pressure setting to be 1 instead of 0 by default.

#              - Added "-p cache_pressure" to allow experimentation with vfs_cache_pressure values

 

I guess I never noticed that I did not print that option letter in the "usage" text, and the alternate value did not continue to be in effect once cache_dirs suspended itself while waiting when you used the -w option.  (In other words, it was undocumented, and a bug existed if you used the -p option together with the -w option.)
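For anyone experimenting, the value behind cache_dirs' -p flag is the kernel's vm.vfs_cache_pressure tunable, readable through /proc. A minimal sketch (Linux-only; the write shown in the comments requires root, and the exact cache_dirs invocation is my inference from the 1.6 changelog above, not verified):

```python
from pathlib import Path

# vm.vfs_cache_pressure controls how readily Linux reclaims the dentry/inode
# caches that cache_dirs works to keep warm (the kernel default is 100).
pressure = int(Path("/proc/sys/vm/vfs_cache_pressure").read_text())
print(pressure)

# Lowering it makes cached directory entries stick around longer, e.g. (as root):
#   echo 10 > /proc/sys/vm/vfs_cache_pressure
# Per the changelog above, "cache_dirs -p 10" should arrange the same thing.
```

Whether a lower pressure is safe depends on how much RAM the rest of the system needs, which is exactly the low-memory concern raised above.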

 

I'll post a fixed version shortly. 

 

Edit: (Version 1.6.7 now attached to the first post in the cache_dirs thread)

 

Joe L.

Link to comment

I just voted. As one of the users with the board in question and who is suffering from slow write speeds, I look forward to a solution.

 

In the mean time, the use of a cache drive has really helped mitigate the problem.

 

I say release with a good article explaining the problem and some solutions. That said, it will increase complaints/support requests, as people don't read!

 

 

Link to comment

I say release with a good article explaining the problem and some solutions. That said, it will increase complaints/support requests as people don't read!

 

amen, Amen, AMEN!!!!

Link to comment
What does 4.7 default to, 63 or 64? Preclear the opposite of the default, then install a new 4.7 so it reverts to the default again.

 

4.7 defaults to sector 63.  During my tests I changed the default from 63 to 64 and back, booting up a new install of 4.7 each time, and it had no effect on the drives.  At this point I am out of ways to easily recreate the issue simply using an unclean power-down or switching from an existing 4.7 install to a new install.

 

The closest I got to losing data was when I forced unRAID into looking like the disks had just been assigned in an initial configuration, so the user was prompted to start the array for the first time even though the assignments were wrong based on the previous array.  Even then there was no MBR corruption.
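As an aside for anyone poking at the sector-63/64 question themselves: a partition's starting sector lives in the MBR's partition table, which begins at byte 446 with 16-byte entries, the start LBA being a little-endian 32-bit field at offset 8 within an entry. A small sketch that builds a synthetic MBR and reads that field back (standard MBR layout, nothing unRAID-specific):

```python
import struct

def partition_start_lba(mbr: bytes, index: int = 0) -> int:
    """Return the starting LBA of partition `index` from a 512-byte MBR."""
    entry_off = 446 + index * 16                 # partition table starts at byte 446
    (start_lba,) = struct.unpack_from("<I", mbr, entry_off + 8)
    return start_lba

# Build a synthetic MBR whose first partition starts at sector 63 (the 4.7 default)
mbr = bytearray(512)
struct.pack_into("<I", mbr, 446 + 8, 63)         # start-LBA field of entry 0
mbr[510:512] = b"\x55\xaa"                       # MBR boot signature

print(partition_start_lba(bytes(mbr)))           # -> 63
```

Pointing the same parser at the first 512 bytes of a real disk (read with dd, for instance) would show whether a given drive starts at 63 or 64.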

Link to comment

The closest I got to losing data was if I forced unraid into looking like the disks had just been assigned in an initial configuration so the user was prompted to start the array for the first time, even though the assignments were wrong based on the previous array.  Even then there was no MBR corruption.

Did you have any drives in the array set to a sector-64 start?  (In other words, not equal to the default of stock unRAID.)

 

Joe L.

Link to comment

I just voted. As one of the users with the board in question and who is suffering from slow write speeds, I look forward to a solution.

 

limiting your usable memory to 4095MB didn't work?

Link to comment


limiting your usable memory to 4095MB didn't work?

 

I haven't tried. Didn't know that was a solution (I am one of those users ;) ). How would I limit the usable memory?

 

 

Link to comment

I haven't tried. Didn't know that was a solution (I am one of those users ;) ). How would I limit the usable memory?

 

lots of talk in this thread and in the RC10 thread.

 

limetech even links to a thread discussing the problem in the first post of this thread:

 

The current state of the code is this.  It seems to work fine for a 5.0 release.  Some rough edges, some more functionality to add/change, but all-in-all I think it's good to go except for the "slow write" issue being hashed out in this thread:

http://lime-technology.com/forum/index.php?topic=22675.0

 

have a look around this post

 

http://lime-technology.com/forum/index.php?topic=25250.msg220983#msg220983
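For the record — and this is my assumption about what those linked posts describe, not something stated here — the usual way to cap usable memory is the kernel's mem= boot parameter, added to the append line of syslinux.cfg on the unRAID flash drive. Illustrative only; the exact label and append line vary by install, so edit your own file rather than copying this verbatim:

```
label unRAID OS
  kernel bzimage
  append mem=4095M initrd=bzroot
```

After a reboot, free -m should then report roughly 4GB regardless of installed RAM.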

Link to comment

have a look around this post

 

http://lime-technology.com/forum/index.php?topic=25250.msg220983#msg220983

 

Thanks. I just had not had the time to sift through all of this.

 

I'll take a look at this discussion and see if it helps. Thank you.

 

Link to comment

While I have no opinion on the matter of determining the stable release, and I know everyone has different opinions on what is or should be prioritized, there is a pretty big (IMHO) bug/limitation that causes an interruption of data transmission during simple operations like browsing directories or copying files to unRAID.  There are a few threads discussing the issue, and here is one such thread:

 

http://lime-technology.com/forum/index.php?topic=20313.0

 

Being that unRAID's primary focus is serving media, I feel this issue should get some serious attention as it directly and adversely affects performance during such usage.  I can't tell you how many times I experience an interruption while watching video or listening to music.  I'm not going to say that I think this needs to be fixed before 5.0 Final and I realize Tom probably has a lot on his plate as it is, but again, we are talking about basic functionality here.  Just my $0.02.

Link to comment

I used to get this until I started using the cache-dirs add-on, so I suspect it may well have something to do with unRAID spinning up drives to try to determine where to put a file.  With cache-dirs active to avoid spinning up disks to read directories, and a cache disk for writing new files to, I no longer get these symptoms.

Link to comment

I see this too, but I've put it down to the useless Windows client end (see also Explorer activity stopping when a drive spins up).

 

Other clients accessing unRAID at the time don't see any stuttering or problems with streaming; it's only the single machine that's trying to do multiple things at once.

 

So it's either a rubbish client side from Windows, or something in Samba with the single daemon handling requests from that client.

 

I haven't tested against another, non-unRAID Samba server, which would be a good test to rule the unRAID-specific implementation out (or in).

Link to comment

Note also (in case some missed it) that Tom put some measures in place in RC9 to address some media-playback glitch issues (although not ones specifically related to spin-up delays)...

 

http://lime-technology.com/forum/index.php?topic=25184.msg218799#msg218799

Link to comment

"... I wonder what decision tom has taken now... "  ==>  A lot of us are probably wondering that  :)

 

I think the key decision point is simple:  Based on his knowledge of current issues, does Tom consider RC10 as stable as 4.7?    If yes -- then release.    If no -- then wait until the answer is yes.

 

The "stuttering" issues discussed in the threads linked above also exist in 4.7, although when I see them it's almost certainly due to drive spin-ups.  It also sounds like cache_dirs mitigates them a lot.

 

An interesting side note:  Tomorrow (24th) is the 2-year anniversary of the release of 4.7  :)

 

Link to comment

Release 5.0 final and start working on a 64-bit upgrade, but limit it to just that. Don't add any other capability, because that will lengthen the development time; just convert to a 64-bit kernel.

 

FWIW - this is why there should have been a 4.8 release to add support for >2TB drives. That could have been released a long time ago. There was a poll way back where the users voted to include a whole host of new features rather than simply add >2TB support, yet reading through this thread, most people supported a 5.0 final release because we need >2TB support.

Link to comment

I'd like to see an "official" plugin system implemented sooner rather than later.  It was 'promised' for v5, but I don't want to see 'final' held up for it.

 

I think an x64 version would be great for those folks that use more memory, but it seems the plugin authors are holding off on making some 'final' changes until the official system is released.

 

Having dependencies all managed by unRAID seems like it would benefit most everyone, amongst other things.

Link to comment
