Your Chance to Chime In


limetech


I wonder what decision Tom has made now...

He released a poll and the answer is clear..

What did he decide?

 

He posted a day or so ago in the write-issue thread saying he wants to fix that before releasing the final. He wants the software to be perfect before releasing it, which I can understand, but that's also why 5.0 has been in beta for what, two years?

 

I have the write speed issue, and I still think he should release 5.0 final. The more people on 5.0, the faster bugs will get figured out and fixed. None of the existing bugs cause any stability problems, and 5.0 has been rock-solid for a while now.


I am going to chime in again... as it seems there are more than a few users whose priorities differ from my own.

 

Why? Support for 3TB drives.

 

With support for 3TB drives, I will not need to upgrade my PSU, case, or SATA controller. That would cost at least $300 (AUD).

 

I understand that many users have run the 5.0 beta and RC series and have had mostly good results.

 

I am not keen to move to a beta or RC system, mostly because of support. While a beta is fresh it is well supported on the forum, but community interest in a specific beta like "5.0-rc5-r8168" dwindles over time and leaves its users out in the cold.

 

I have followed the series of beta and RC releases and have seen how each new beta has created bugs for some users, with the follow-up beta sometimes introducing show-stopper bugs of its own.

 

For example:

You need to move on from a particular RC0 version to fix bug A.

You can't move to RC1 because a critical bug makes it incompatible with your hardware.

RC2 could be months away, with little to no feedback from Tom on where it stands.

 

You need to stay on the bleeding edge to get any support from the forum (or stay on 4.7).

 

 

 

Version 5.0 final will help the community: it will mark a reference point that is widely accepted (bugs and all), and the wiki can document any known faults and workarounds.

 

Version 5.0 final will help those who are not keen to fall into limbo in terms of longer-term support. While this may not matter much to Linux gurus, it is a bit too much for those who are fresher and like to treat unRAID as an appliance.

 

unRAID is super reliable for me; this is very important.

unRAID 4.7 is well supported; this is important to me.

 

I can't touch a 5.0 beta or RC, as my unRAID system is the heart of my media solution (XBMC) at home, and my wife would be very shitty if it became unreliable or unavailable.

 

 

 

 


Out of curiosity, has anyone just tried to compile unRAID with a 64-bit kernel to see what happens? I'd do it, but I have no idea how.

Yes - the following wiki article covers getting this running: http://lime-technology.com/wiki/index.php/Installing_unRAID_5.0_on_a_full_Slackware_Distro

 

Note that this is not as good as unRAID itself moving to be fully 64-bit. The wiki article covers getting 32-bit unRAID running on a 64-bit Slackware system.


I have the write speed issue, and I still think he should release 5.0 final. The more people on 5.0, the faster bugs will get figured out and fixed. None of the existing bugs cause any stability problems, and 5.0 has been rock-solid for a while now.

 

The write speed issue is pretty severe.  I regularly rip content to my unRAID server, and my desktop machines run nightly backups to it.  I'd be furious if my write speeds dropped to 1-2MB/sec.  I wouldn't find it usable.  I guess I could downgrade, but I wouldn't be thrilled to redo my unRAID server to v5 only to downgrade it a few days later.

 

I'm pretty surprised the poll is as lopsided as it is.  It seems like the write speed bug is a showstopper, at least until he can narrow down the circumstances when it happens.

 

 


I have the write speed issue, and I still think he should release 5.0 final. The more people on 5.0, the faster bugs will get figured out and fixed. None of the existing bugs cause any stability problems, and 5.0 has been rock-solid for a while now.

 

The write speed issue is pretty severe.  I regularly rip content to my unRAID server, and my desktop machines run nightly backups to it.  I'd be furious if my write speeds dropped to 1-2MB/sec.  I wouldn't find it usable.  I guess I could downgrade, but I wouldn't be thrilled to redo my unRAID server to v5 only to downgrade it a few days later.

 

I'm pretty surprised the poll is as lopsided as it is.  It seems like the write speed bug is a showstopper, at least until he can narrow down the circumstances when it happens.

 

If you read the threads on the problem, you would find that it is limited to a small set of hardware. Most people are not having an issue with it. In fact, a good portion of the affected hardware can be made to function properly with a few simple software settings, but there are a few cases where those settings do not address the issue. Apparently the problem is in the 32-bit kernel, and when the Linux kernel developers will get around to addressing it is anyone's guess.


Out of curiosity, has anyone just tried to compile unRAID with a 64-bit kernel to see what happens? I'd do it, but I have no idea how.

 

I've been running a 64-bit Linux kernel with the latest versions of unRAID (32-bit emhttp/shfs) for a couple of years now. I even posted a thread or two here on how to do so.


If you read the threads on the problem, you would find that it is limited to a small set of hardware. Most people are not having an issue with it. In fact, a good portion of the affected hardware can be made to function properly with a few simple software settings, but there are a few cases where those settings do not address the issue. Apparently the problem is in the 32-bit kernel, and when the Linux kernel developers will get around to addressing it is anyone's guess.

 

Yes, I saw that. But unless there was a new breakthrough, I didn't see anything that indicated which hardware is hit by the bug. The closest thing was that it seemed to affect users with "newer" hardware, though I don't know if that theory has held up.

 

Basically, how reliably can a user predict whether they will be hit by the bug before upgrading? It seems like a problem if you can't predict it any better than random guessing based on the bug's prevalence on the forum.


If you read the threads on the problem, you would find that it is limited to a small set of hardware. Most people are not having an issue with it. In fact, a good portion of the affected hardware can be made to function properly with a few simple software settings, but there are a few cases where those settings do not address the issue. Apparently the problem is in the 32-bit kernel, and when the Linux kernel developers will get around to addressing it is anyone's guess.

 

Yes, I saw that. But unless there was a new breakthrough, I didn't see anything that indicated which hardware is hit by the bug. The closest thing was that it seemed to affect users with "newer" hardware, though I don't know if that theory has held up.

 

Basically, how reliably can a user predict whether they will be hit by the bug before upgrading? It seems like a problem if you can't predict it any better than random guessing based on the bug's prevalence on the forum.

 

I thought it was a specific motherboard/chipset that caused the problem?


Out of curiosity, has anyone just tried to compile unRAID with a 64-bit kernel to see what happens? I'd do it, but I have no idea how.

Yes - the following wiki article covers getting this running: http://lime-technology.com/wiki/index.php/Installing_unRAID_5.0_on_a_full_Slackware_Distro

 

Note that this is not as good as unRAID itself moving to be fully 64-bit. The wiki article covers getting 32-bit unRAID running on a 64-bit Slackware system.

 

Yeah, just not the same, as you mention. It will still hit the 4GB issue, won't it?

 



Out of curiosity, has anyone just tried to compile unRAID with a 64-bit kernel to see what happens? I'd do it, but I have no idea how.

Yes - the following wiki article covers getting this running: http://lime-technology.com/wiki/index.php/Installing_unRAID_5.0_on_a_full_Slackware_Distro

 

Note that this is not as good as unRAID itself moving to be fully 64-bit. The wiki article covers getting 32-bit unRAID running on a 64-bit Slackware system.

 

Yeah, just not the same, as you mention. It will still hit the 4GB issue, won't it?

 


 

No. The Linux system will have full access to all the memory. It won't have to use PAE, which is what seems to be partly at fault on those "slow write" systems. The only bits still limited would be emhttp and shfs, but you will still benefit from larger file and drive buffers.

 

This is what the vmstat and free -l outputs look like on an 8 GB system:

 

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
1  0      0  70268 485604 6452364    0    0    18    16    1    1  0  0 100  0
             total       used       free     shared    buffers     cached
Mem:       8176384    8105868      70516          0     485604    6452364
Low:       8176384    8105868      70516
High:            0          0          0
-/+ buffers/cache:    1167900    7008484
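For anyone wanting to check this on their own box, two standard commands (nothing unRAID-specific; the exact figures will differ per machine) show whether the kernel is 64-bit and whether there is still a Low/High memory split:

```shell
#!/bin/sh
# Report the kernel word size and memory split. On a 64-bit kernel,
# `uname -m` prints x86_64 and `free -l` shows a High row of 0, as in
# the output above -- all RAM is directly addressable, no PAE needed.
uname -m    # x86_64 on a 64-bit kernel, i686/i586 on 32-bit
free -l     # -l adds the Low/High memory breakdown
```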

 

 


I used to get this until I started using the cache-dirs add-on, so I suspect it may well have something to do with unRAID spinning up drives to try to determine where to put a file. With cache-dirs active to avoid spinning up disks to read directories, and a cache disk for writing new files to, I no longer get these symptoms.
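For context, the core trick cache-dirs relies on can be sketched in a few lines of shell (the path and interval here are illustrative, not the add-on's actual defaults): repeatedly walking the tree keeps directory entries in the kernel's dentry/inode cache, so listing a share is served from RAM instead of waking a disk.

```shell
#!/bin/sh
# Sketch of the cache-dirs idea: a periodic find(1) over the shares
# keeps directory metadata cached in RAM, so browsing never needs to
# spin a disk up. /mnt/user and the 10 s interval are illustrative.
cache_scan() {
    find "$1" >/dev/null 2>&1   # side effect: warm dentry/inode cache
}

# The real add-on loops something like this in the background:
#   while true; do cache_scan /mnt/user; sleep 10; done
cache_scan /mnt/user || true
```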

 

Note also (in case some missed it) that Tom put some measures in place in RC9 in order to address some media playback glitch issues (although not specifically related to spin-up delays) ...

 

http://lime-technology.com/forum/index.php?topic=25184.msg218799#msg218799

 

I upgraded to rc10 and the bug still exists.


I used to get this until I started using the cache-dirs add-on, so I suspect it may well have something to do with unRAID spinning up drives to try to determine where to put a file. With cache-dirs active to avoid spinning up disks to read directories, and a cache disk for writing new files to, I no longer get these symptoms.

 

I could see that being a decent workaround, but IMO it's just that: a workaround. It's a good tip, but I prefer to write my files directly to specific disks, which rules out a cache drive for me, and cache-dirs isn't going to help when unRAID has to spin up the drive to actually write to it. Then I would still get an interruption in playback.

 

Tom - If you're still reading this thread, do you have any idea what the cause of this is?


I used to get this until I started using the cache-dirs add-on, so I suspect it may well have something to do with unRAID spinning up drives to try to determine where to put a file. With cache-dirs active to avoid spinning up disks to read directories, and a cache disk for writing new files to, I no longer get these symptoms.

 

I could see that being a decent workaround, but IMO it's just that: a workaround. It's a good tip, but I prefer to write my files directly to specific disks, which rules out a cache drive for me, and cache-dirs isn't going to help when unRAID has to spin up the drive to actually write to it. Then I would still get an interruption in playback.

 

Tom - If you're still reading this thread, do you have any idea what the cause of this is?

 

Sorry, I just finished reading through all 21 pages of the "X9SCM-F slow write speed, good read speed" thread, which completely cleared my brain's short-term memory cache... what problem are you referring to?


I used to get this until I started using the cache-dirs add-on, so I suspect it may well have something to do with unRAID spinning up drives to try to determine where to put a file. With cache-dirs active to avoid spinning up disks to read directories, and a cache disk for writing new files to, I no longer get these symptoms.

 

I could see that being a decent workaround, but IMO it's just that: a workaround. It's a good tip, but I prefer to write my files directly to specific disks, which rules out a cache drive for me, and cache-dirs isn't going to help when unRAID has to spin up the drive to actually write to it. Then I would still get an interruption in playback.

 

Tom - If you're still reading this thread, do you have any idea what the cause of this is?

 

Sorry, I just finished reading through all 21 pages of the "X9SCM-F slow write speed, good read speed" thread, which completely cleared my brain's short-term memory cache... what problem are you referring to?

 

The problem where accessing a spun down drive while streaming media from unRAID results in interruptions in the playing stream.

 


I appreciate the feedback. I thought you were suggesting the cache drive was the difference; my bad. I've read numerous other posts about this same issue, and I've experienced it across multiple machines running different versions of unRAID. You're fortunate not to have the problem.


I used to get this until I started using the cache-dirs add-on, so I suspect it may well have something to do with unRAID spinning up drives to try to determine where to put a file. With cache-dirs active to avoid spinning up disks to read directories, and a cache disk for writing new files to, I no longer get these symptoms.

 

I could see that being a decent workaround, but IMO it's just that: a workaround. It's a good tip, but I prefer to write my files directly to specific disks, which rules out a cache drive for me, and cache-dirs isn't going to help when unRAID has to spin up the drive to actually write to it. Then I would still get an interruption in playback.

 

Tom - If you're still reading this thread, do you have any idea what the cause of this is?

 

Sorry, I just finished reading through all 21 pages of the "X9SCM-F slow write speed, good read speed" thread, which completely cleared my brain's short-term memory cache... what problem are you referring to?

 

The problem where accessing a spun down drive while streaming media from unRAID results in interruptions in the playing stream.

What motherboard and disk controllers are you using? The 'spinup groups' feature was added back when IDE drives were still common and SATA controllers were just two-channel IDE controllers in disguise. On an IDE controller, if you are streaming from one drive, say the master, and you spin up the slave, the media pauses because the channel hangs during the spin-up. This could also happen with older SATA ports. Spinup groups let you tag such pairs or sets of drives so they spin up and down as a group.

