tyrindor Posted January 23, 2013

I wonder what decision Tom has taken now. He released a poll and the answer is clear, so what did he decide? He posted a day or so ago in the write-issue thread saying he wants to fix that before releasing final. He wants perfect software before releasing it, which I can understand, but that's also why 5.0 has been in beta for what, two years? I have the write speed issue and still think he should release 5.0 final. The more people on 5.0, the faster bugs are going to get figured out and fixed. None of the existing bugs cause any stability problems, and 5.0 has been rock solid for a while now.
jumperalex Posted January 23, 2013

Out of curiosity, has anyone just tried to compile unRAID with a 64-bit kernel to see what happens? I'd do it, but I have no idea how.
kortina Posted January 23, 2013

I am going to chime in again, as it seems there are more than a few users whose priorities differ from my own.

Why do I want 5.0 final? Support for 3TB drives. With support for 3TB drives I will not need to upgrade my PSU, case, or SATA controller, which would cost me at least $300 (AUD).

I understand that many users have run the 5.0 beta and 5.0 RC series with mostly good results. I am not keen to move to a beta or RC system, mostly because of support. While a beta is fairly fresh it is supported well on the forum, but community interest in a specific beta like "5.0-rc5-r8168" will dwindle and leave its users out in the cold. I have followed the series of beta and RC releases and have seen how progressing the betas has created bugs for some, with the follow-up beta introducing showstopper bugs. For example: you need to move on from a particular RC0 version to fix bug A; you can't move to RC1 because a critical bug makes it incompatible with your hardware; and RC2 could be months away, with little to no feedback from Tom on where it stands. You need to stay on the bleeding edge to get any support from the forum (or stay on 4.7).

Version 5.0 final will help the community: it will mark a reference point, the community will widely accept that version (bugs and all), and there can be documented wiki pages that point out any faults and workarounds. Version 5.0 final will also help those who are not keen to fall into limbo in terms of longer-term support. While this may not matter much to Linux gurus, it is a bit too much for those who are fresh and like to treat unRAID as an appliance.

unRAID is super reliable for me; this is very important. unRAID 4.7 is well supported; this is important to me. I can't touch a 5.0 beta or RC, as my unRAID system is the heart of my media solution (XBMC) at home, and my wife would be very shitty if it became unreliable or unavailable.
itimpi Posted January 23, 2013

> Out of curiosity, has anyone just tried to compile unRAID with a 64-bit kernel to see what happens? I'd do it, but I have no idea how.

Yes - the following wiki article covers getting this running: http://lime-technology.com/wiki/index.php/Installing_unRAID_5.0_on_a_full_Slackware_Distro

Note that this is not as good as unRAID itself moving to be fully 64-bit: the wiki article covers getting 32-bit unRAID running on a 64-bit Slackware system.
reggie14 Posted January 23, 2013

> I have the write speed issue and still think he should release 5.0 final. The more people on 5.0, the faster bugs are going to get figured out and fixed. None of the existing bugs cause any stability problems, and 5.0 has been rock solid for a while now.

The write speed issue is pretty severe. I regularly rip content to my unRAID server, and my desktop machines run nightly backups to it. I'd be furious if my write speeds dropped to 1-2 MB/sec; I wouldn't find it usable. I guess I could downgrade, but I wouldn't be thrilled to redo my unRAID server for v5 only to downgrade it a few days later.

I'm pretty surprised the poll is as lopsided as it is. The write speed bug seems like a showstopper, at least until he can narrow down the circumstances in which it happens.
Frank1940 Posted January 23, 2013

> The write speed issue is pretty severe. I regularly rip content to my unRAID server, and my desktop machines run nightly backups to it. I'd be furious if my write speeds dropped to 1-2 MB/sec; I wouldn't find it usable. I guess I could downgrade, but I wouldn't be thrilled to redo my unRAID server for v5 only to downgrade it a few days later. I'm pretty surprised the poll is as lopsided as it is. The write speed bug seems like a showstopper, at least until he can narrow down the circumstances in which it happens.

If you read the threads on the problem, you would find that it is limited to a small set of hardware. Most people are not having an issue with it. In fact, a goodly portion of the affected hardware can be made to function properly with a few simple software settings, but there seem to be a few cases where those settings do not address the issue. Apparently the problem is in the 32-bit kernel, and when the Linux boys will get around to addressing it is anyone's guess.
BRiT Posted January 24, 2013

> Out of curiosity, has anyone just tried to compile unRAID with a 64-bit kernel to see what happens? I'd do it, but I have no idea how.

I've been running a 64-bit Linux kernel with the latest versions of unRAID (32-bit emhttp/shfs) for a couple of years now. I even posted a thread or two in here on how to do so.
reggie14 Posted January 24, 2013

> If you read the threads on the problem, you would find that it is limited to a small set of hardware. Most people are not having an issue with it. In fact, a goodly portion of the affected hardware can be made to function properly with a few simple software settings, but there seem to be a few cases where those settings do not address the issue. Apparently the problem is in the 32-bit kernel, and when the Linux boys will get around to addressing it is anyone's guess.

Yes, I saw that. But unless there was a new breakthrough, I didn't see anything that indicated what hardware is hit by the bug. The closest thing was that it seemed to affect users with "newer" hardware, though I don't know whether that theory has stood up.

Basically, how reliably can a user predict whether they would be hit by the bug before upgrading? If you can't predict it any better than random guessing based on the prevalence of the bug on the forum, that's a problem.
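Short of a confirmed hardware list, the most direct way to find out whether a given box is affected is simply to measure sustained write speed after upgrading. A minimal sketch using GNU dd (the target path is a placeholder; on an unRAID box you would point it at an array disk such as /mnt/disk1 to exercise the parity-protected write path):

```shell
# Quick-and-dirty sequential write benchmark. TESTFILE is a placeholder;
# substitute a path on an array disk to measure real parity-protected
# writes. conv=fdatasync forces the data to disk before dd reports its
# rate, so the page cache can't inflate the number.
TESTFILE=${TESTFILE:-/tmp/unraid-write-test.bin}
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n1
rm -f "$TESTFILE"
```

Affected systems in the threads report rates in the 1-2 MB/s range on that kind of test, versus tens of MB/s on unaffected hardware.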
pantner Posted January 24, 2013

> Yes, I saw that. But unless there was a new breakthrough, I didn't see anything that indicated what hardware is hit by the bug. The closest thing was that it seemed to affect users with "newer" hardware, though I don't know whether that theory has stood up. Basically, how reliably can a user predict whether they would be hit by the bug before upgrading?

I thought it was a specific motherboard/chipset that caused the problem?
jumperalex Posted January 24, 2013

> Yes - the following wiki article covers getting this running: http://lime-technology.com/wiki/index.php/Installing_unRAID_5.0_on_a_full_Slackware_Distro
>
> Note that this is not as good as unRAID itself moving to be fully 64-bit: the wiki article covers getting 32-bit unRAID running on a 64-bit Slackware system.

Yeah, just not the same, as you mention. It will still hit the 4GB issue, won't it?

Sent from my Nexus 7 using Tapatalk HD
BRiT Posted January 24, 2013

> Yeah, just not the same, as you mention. It will still hit the 4GB issue, won't it?

No. The Linux system will have full access to all the memory. It won't have to do PAE, which is what seems to be partly at fault on those "slow write" systems. The only bits limited would possibly be emhttp and shfs, but you will still benefit from larger file and drive buffers.

This is what the vmstat and free -l outputs would look like on an 8 GB system:

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0      0  70268 485604 6452364    0    0    18    16    1    1  0  0 100  0

             total       used       free     shared    buffers     cached
Mem:       8176384    8105868      70516          0     485604    6452364
Low:       8176384    8105868      70516
High:            0          0          0
-/+ buffers/cache:    1167900    7008484
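For anyone wanting to check which situation their own box is in, a couple of stock commands show whether the kernel is 64-bit and whether HighMem/PAE is involved at all. A rough sketch (output details vary by distro and procps version):

```shell
# Rough checks for the 32-bit vs 64-bit memory situation discussed above.
uname -m                 # x86_64 = 64-bit kernel; i686 = 32-bit
# On a 64-bit kernel the "High:" row of `free -l` is all zeros, because
# no RAM has to be reached through the HighMem mapping.
free -l | awk '/^(Low|High):/ {print}'
# Whether the CPU could do PAE at all:
if grep -q '\bpae\b' /proc/cpuinfo 2>/dev/null; then
    echo "CPU reports PAE support"
fi
```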
jaybee Posted January 27, 2013

So Tom, what are your thoughts? I have not seen you reply in this thread for a while, if at all.
RockDawg Posted February 1, 2013

> I used to get this until I started using the cache-dirs add-on, so I suspect it may well have something to do with unRAID spinning up drives to try and determine where to put a file. With cache-dirs active to avoid spinning up disks to read directories, and a cache disk for writing the new files to, I no longer get these symptoms.
>
> Note also (in case some missed it) that Tom put some measures in place in RC9 in order to address some media playback glitch issues (although not specifically related to spin-up delays): http://lime-technology.com/forum/index.php?topic=25184.msg218799#msg218799

I upgraded to rc10 and the bug still exists.
RockDawg Posted February 1, 2013

> I used to get this until I started using the cache-dirs add-on, so I suspect it may well have something to do with unRAID spinning up drives to try and determine where to put a file. With cache-dirs active to avoid spinning up disks to read directories, and a cache disk for writing the new files to, I no longer get these symptoms.

I could see that being a decent workaround, but IMO it's just that - a workaround. It's a good tip, but I prefer to write my files directly to specific disks, so that rules out a cache drive for me, and cache-dirs isn't going to help when unRAID has to spin up the drive to actually write to it - I would still get an interruption in playback.

Tom - if you're still reading this thread, do you have any idea what the cause of this is?
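For context, the cache-dirs add-on discussed above essentially keeps walking the share tree so directory entries stay warm in the kernel's dentry cache and browsing never has to wake a disk. A simplified sketch of that idea (an assumption about how the add-on works, not its actual code; SHARE is an example path, and the real script runs as a daemon with adaptive depth and interval):

```shell
# Simplified sketch of the cache-dirs idea: periodically traverse the
# shares so directory entries stay in the kernel's dentry cache and
# listings don't spin up disks. SHARE is an example path.
SHARE=${SHARE:-/mnt/user}

cache_walk() {
    # Reading every directory entry pulls it into the dentry cache;
    # the output is discarded - only the traversal matters.
    find "$SHARE" -noleaf >/dev/null 2>&1
}

# The real add-on loops forever with a tunable interval; three passes
# are enough to illustrate.
for _ in 1 2 3; do
    cache_walk
    sleep 1
done
```

As RockDawg notes, this only helps reads of directory metadata; an actual write still has to spin the target disk up.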
limetech (Author) Posted February 1, 2013

> I could see that being a decent workaround, but IMO it's just that - a workaround. It's a good tip, but I prefer to write my files directly to specific disks, so that rules out a cache drive for me, and cache-dirs isn't going to help when unRAID has to spin up the drive to actually write to it - I would still get an interruption in playback. Tom - if you're still reading this thread, do you have any idea what the cause of this is?

Sorry, I just finished reading through all 21 pages of the "X9SCM-F slow write speed, good read speed" thread, which completely cleared my brain's short-term memory cache... what problem are you referring to?
RockDawg Posted February 1, 2013

> Sorry, I just finished reading through all 21 pages of the "X9SCM-F slow write speed, good read speed" thread, which completely cleared my brain's short-term memory cache... what problem are you referring to?

The problem where accessing a spun-down drive while streaming media from unRAID results in interruptions in the playing stream.
JM2005 Posted February 1, 2013

So RockDawg, you're saying you don't use a cache drive in your setup?
RockDawg Posted February 1, 2013

No. I've never used a cache drive.
JM2005 Posted February 1, 2013

I do use a cache drive myself, and I never get slowdowns or pausing of media when I wake a spun-down drive to read or write to it. I just tested on 4 drives that were spun down, and they all worked without causing any playback issues. I don't use the cache_dirs thing.
RockDawg Posted February 1, 2013

So you are saying that even though you have a cache drive, you have written directly to a spun-down drive while streaming from another one, without the stream being interrupted at all? How would the cache drive even matter then?
JM2005 Posted February 1, 2013

I was just letting you know how mine is working after reading the message about the cache drive and cache-dirs. I was trying to see if mine would pause also, and it didn't. Hopefully you will be able to find a solution.
RockDawg Posted February 1, 2013

I appreciate the feedback; I thought you were suggesting the cache drive was the difference. My bad. I've read numerous other posts about this same issue, and I've experienced it across multiple machines running numerous different versions of unRAID. You're fortunate not to have the problem.
limetech (Author) Posted February 1, 2013

> The problem where accessing a spun-down drive while streaming media from unRAID results in interruptions in the playing stream.

What m/b and disk controllers are you using? The 'spinup groups' feature was added back in the day when IDE drives were still common and SATA controllers were just 2-channel IDE controllers in disguise. On an IDE controller, if you are streaming from one drive, say the master, and you spin up the slave drive, media pauses because the channel hangs on the spinup. This could also happen with older SATA ports. Spinup groups let you tag such pairs or sets of drives that you want to spin up/down as a group.
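The channel-hang Tom describes can also be dodged by hand: check whether a drive is in standby and pre-spin it with a harmless read before playback starts, so the spin-up never lands in the middle of a stream. A hypothetical helper (the device node is an example, and hdparm must be installed; this is a workaround sketch, not an unRAID feature):

```shell
# Hypothetical helper for the spin-up pauses described above: if the
# drive is in standby, wake it with a tiny read before starting playback.
# /dev/sdb is an example device node - substitute your own.
DEV=${DEV:-/dev/sdb}

if hdparm -C "$DEV" 2>/dev/null | grep -q standby; then
    # Any read forces the platters to spin up; one sector is enough.
    dd if="$DEV" of=/dev/null bs=512 count=1 2>/dev/null
    echo "spun up $DEV"
fi
```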
RockDawg Posted February 1, 2013

My motherboard is a Supermicro X9SCM-F and my drives are running off 3 LSI M1015s.
mrlittlejeans Posted February 1, 2013

I've never had an interruption streaming while writing to the server. It doesn't matter if I'm reading from the same disk that is being written to, either.