PeterB Posted September 23, 2011

"I have a very strong suspicion that Tom does not earn a living wage from unRAID sales" "You have no way of knowing any of this"

Quite - which is why I say it's a suspicion. I've run my own IT company, and I've developed software for my own use that was subsequently brought to market. Those sales never showed a profit on the overall development effort, but then they were never expected, or intended, to. However, I did recover a little of my personal investment, and a significant number of people benefited from using the software. (If you still want to make guesses, though, here's a hint.) Download figures for software which has been available for a number of years, and which can be used without a license fee, mean very little. The only way we could arrive at any meaningful conclusion would be to see details of license revenue, or a breakdown of license sales. As I say, I was simply voicing a suspicion. "But all this is irrelevant, really." Quite so!
ibixat Posted September 23, 2011

Maybe I'm crazy, but I base my software purchases on what a product does when I buy it, and consider anything it can do tomorrow or later through free upgrades as gravy. If it supported up to 2.2-terabyte disks when I bought it, I find it difficult to be upset that it doesn't support 3 TB or larger six months later. I will be happy when it does support them and I've not had to spend anything more to get that support, but my initial purchase was for software that supported a specific drive size, and the official manual and FAQ, when checked for 3TB support, say "not yet" and give no ETA, no mention of "soon" or anything along those lines. Now, I won't argue that 14 months is a long beta. But if that's how long it's going to take, that's how long it takes. In the meantime, the stable 4.7 release does everything it claimed to be able to do when I bought it, and that's really all I can ask of a product.
purko Posted September 23, 2011

I don't know why this whole debate even started. A guy simply said, I wish there's 3TB drive support in unRaid. Well, I too wish that. That's all.
WeeboTech Posted September 23, 2011

"I don't know why this whole debate even started. A guy simply said, I wish there's 3TB drive support in unRaid. Well, I too wish that. That's all."

Purko, you're a cool guy, with strong opinions. The debate started because someone said it was an unreasonable request and you said it wasn't unreasonable. limetech has already stated no new features in the 4.7 series. Have you tried the 5.x series? And if so, what is stopping its use?
purko Posted September 23, 2011

"Have you tried the 5.x series? And if so, what is stopping its use?"

Come on, Weebo, you know well what's stopping its use: it's a beta.
WeeboTech Posted September 23, 2011

"Come on, Weebo, you know well what's stopping its use: it's a beta."

LOL yeah, I agree, but have you tried it and found issues with it?
purko Posted September 24, 2011

"LOL yeah, I agree, but have you tried it and found issues with it?"

Yes, and yes. I can't ship servers with it.
gokuz Posted October 2, 2011

I can't download from the download page. Any mirrors?
jbartlett Posted October 2, 2011

I had no issues just now with downloading the most recent beta or 4.7.
gokuz Posted October 2, 2011

Yup, they just fixed it. Thanks!
MortenSchmidt Posted November 29, 2011

Tom has disclosed a major bug in 4.7: http://lime-technology.com/forum/index.php?topic=13866.0 I believe I've encountered this bug; see here: http://lime-technology.com/forum/index.php?topic=12884.msg132178#msg132178 So, where is 4.7.1? It's been 4½ months now, and honestly that's just about 4½ months too long to fix a bug of this severity in the "stable" release branch. I have a drive that's starting to reallocate sectors now and want to rebuild it. Not a happy camper here.
olympia Posted November 29, 2011

I see this regularly on disk upgrades too. Would be great if this could be fixed in the 4.7 line.
ohlwiler Posted November 29, 2011

I too have been wanting this fix, as I run two 4.7 servers and I don't want to move to 5.0. In the meantime, I strictly follow one rule: if a disk rebuild or disk upgrade is in progress, I do not write to the array. The probability of the bug being triggered is small, but the severity is great, so luckily the workaround is easy.
WeeboTech Posted November 29, 2011

"If a disk rebuild or disk upgrade is in progress I do not write to the array."

Does this bug affect parity rebuilds too?
abs0lut.zer0 Posted November 29, 2011

"So, where is 4.7.1? It's been 4½ months now, and honestly that's just about 4½ months too long to fix a bug of this severity in the "stable" release branch."

+1. Is there ever going to be a 4.7.1, or has the 4.x series been abandoned, so that we have to upgrade to the 5.x series to fix this problem even though it's not stable, as stated by limetech itself?
Joe L. Posted November 29, 2011

"Does this bug affect parity rebuilds too?"

It would affect any rebuild, but a parity rebuild followed by a parity "check" would detect it and correct parity. The problem is when re-constructing a data drive, as there is no equivalent "check". As stated, the work-around is NOT to write to a data drive while re-constructing that drive in the array.
Joe L. Posted November 29, 2011

There is a second, equally serious bug in the 4.7 version of unRAID, as shown here: http://lime-technology.com/forum/index.php?topic=16523.0 and here: http://lime-technology.com/forum/index.php?topic=16471.0 and here: http://lime-technology.com/forum/index.php?topic=15385.0 Attempting to re-construct a super.dat file will result in the MBR of existing data drives being re-written, often pointing to the wrong starting sector. The result: drives that show as un-formatted (until the partitioning is corrected in the MBR), and a potential loss of all data if the unRAID owner does something on their own that wipes the drive. An un-writable super.dat, or a complete replacement of the flash drive, will cause this bug to show itself in the 4.7 series. Joe L.
jbartlett Posted November 29, 2011

A safe bet would be to unplug your LAN cable to prevent changes.
abs0lut.zer0 Posted November 29, 2011

"An un-writable super.dat, or a complete replacement of the flash drive will result in this bug showing itself in the 4.7 series."

Once again: are these issues being addressed, or must we wait for the 5.x series to become stable?
Dougy Posted November 30, 2011

"So, where is 4.7.1? It's been 4½ months now, and honestly that's just about 4½ months too long to fix a bug of this severity in the "stable" release branch."

It annoys me when people constantly complain about 5 still being in beta. However, this concerns me greatly. It seems that this was fixed in 5 beta 8. How can we justify six more beta releases without the promised update to 4.7, which was previously believed to be stable? Somehow priorities seem to be a bit skewed here...
stomp Posted November 30, 2011

Just wait for a stable 5.x and stop whining. Have you ever seen a "stable" version of a piece of software without bugs? No. Tom is now working to stabilize 5.x; there's no point going back to 4.x and releasing a new build. You know the bug, deal with it.
WeeboTech Posted November 30, 2011

"You know the bug, deal with it."

Maybe so, but potential new customers don't. Keep in mind, the mere act of mounting a file system creates a write on the file system. If you are recovering from some other issue and the file system is replaying transactions, that is more writes. I suppose if we knew what the driver changes were, we could do it ourselves. Still, this should be tidied up if a crucial bug exists in a stable production release version. This is one situation where I stand with those wanting a fix.
Joe L. Posted November 30, 2011

"I suppose if we knew what the driver changes were, we could do it ourselves."

I think this old patch to the "md" code describes the issue: http://www.spinics.net/lists/raid/msg33994.html

Although there are many changes in the unRAID "md" driver code between the 4.7 version and the 5.0 versions, the old and new lines equivalent to the patch described on spinics.net are:

Old code in unraid.c:

        /* If we're trying to read a failed disk, then we must read
         * parity and all the "other" disks and compute it.
         */
        if ((col->read_bi || (failed && sh->col[failed_num].read_bi)) &&
            !buff_uptodate(col) && !buff_locked(col)) {
                if (disk_valid( col)) {
                        dprintk("Reading col %d (sync=%d)\n", i, syncing);
                        set_buff_locked( col);
                        locked++;
                        set_bit(MD_BUFF_READ, &col->state);
                }
                else if (uptodate == disks-1) {
                        dprintk("Computing col %d\n", i);
                        compute_block(sh, i); /* also sets it Uptodate */
                        uptodate++;
                        /* if failed disk is enabled, write it */
                        if (disk_enabled( col)) {
                                dprintk("Writing reconstructed failed col %d\n", i);
                                set_buff_locked( col);
                                locked++;
                                set_bit(MD_BUFF_WRITE, &col->state);
                        }
                        /* this stripe is also now in-sync */
                        if (syncing)
                                set_bit(STRIPE_INSYNC, &sh->state);
                }
        }

New code in unraid.c in the 5.0beta series (note: the final three lines of the old code, which set STRIPE_INSYNC, are removed; it is no longer assumed the stripe is in sync):

        /* If we're trying to read a failed disk, then we must read
         * parity and all the "other" disks and compute it.
         * Note: if (failed > 1) there won't be any reads posted to a
         * failed drive because they would have been terminated above.
         */
        if ((col->read_bi || (failed && sh->col[failed_num].read_bi)) &&
            !buff_uptodate(col) && !buff_locked(col)) {
                if (disk_valid( col)) {
                        dprintk("Reading col %d (sync=%d)\n", i, syncing);
                        set_buff_locked( col);
                        locked++;
                        set_bit(MD_BUFF_READ, &col->state);
                }
                else if (uptodate == disks-1) {
                        dprintk("Computing col %d\n", i);
                        compute_block(sh, i); /* also sets it Uptodate */
                        uptodate++;
                        /* if failed disk is enabled, write it */
                        if (disk_enabled( col)) {
                                dprintk("Writing reconstructed failed col %d\n", i);
                                set_buff_locked( col);
                                locked++;
                                set_bit(MD_BUFF_WRITE, &col->state);
                        }
                }
        }

Joe L.
abs0lut.zer0 Posted November 30, 2011

"I think this old patch to the "md" code describes the issue: http://www.spinics.net/lists/raid/msg33994.html ..."

Sorry, Joe L., but for those of us who are not coders, what does this mean in English? Don't take this the wrong way; no sarcasm intended, I merely want to understand too. Thanks.
Joe L. Posted November 30, 2011

"...for those of us that are not coders what does this mean in english..."

Basically, the old code assumed, when constructing a stripe of data to be written to a disk (either when initially calculating parity or when re-constructing a replaced disk), that no other "writes" had also been made to that same stripe. Instead, it used only what was calculated from the other disks in the calculation. It zeroed out an indicator that might have been set if a change had been made to that same stripe of data by an actual write to the array (so the actual write would then never occur). For the bug to affect you, you would have to write to the exact same set of blocks (a stripe) as is being calculated at that specific moment. As mentioned in another thread, this bug has been in the "md" driver in all versions of Linux for years. Joe L.