unRAID Server Release 4.7 "final" Available


limetech


I have a very strong suspicion that Tom does not earn a living wage from unRAID sales

You have no way of knowing any of this

 

Quite - which is why I say it's a suspicion.

 

I've run my own IT company, and I've developed software for my own use that was subsequently brought to market.  Those sales never showed a profit on the overall development effort, but then they were never expected, or intended, to.  However, I did recover a little of my personal investment, and a significant number of people benefited from using the software.

 

(If you still want to make guesses though, here's a hint.)

 

Download figures for software which has been available for a number of years, and which can be used without a license fee, mean very little.  The only way we could arrive at any meaningful conclusion would be to see details of license revenue, or a breakdown of license sales.  As I say, I was simply voicing a suspicion.

 

But all this is irrelevant, really.

 

Quite so!

Link to comment

Maybe I'm crazy, but I base my software purchases on what the product does when I buy it, and I consider anything it can do tomorrow through free upgrades as gravy.  If it supported disks up to 2.2 terabytes when I bought it, I find it difficult to be upset that it doesn't support 3 TB or larger drives six months later.  I will be happy when it does support them, having spent nothing more to get that support, but my initial purchase was for software that supported a specific drive size.  The official manual and FAQ, when checked for 3 TB support, say "not yet", give no ETA, and make no mention of "soon" or anything along those lines.

 

Now, I won't dispute that 14 months is a long beta, but if that's how long it's going to take, that's how long it takes.  In the meantime, the stable 4.7 release does everything it claimed to do when I bought it, and that's really all I can ask of a product.

Link to comment

I don't know why this whole debate even started.  A guy simply said, "I wish there were 3 TB drive support in unRAID."  Well, I wish that too.  That's all.

 

Purko, you're a cool guy, with strong opinions.

The debate started because someone said it was an unreasonable request and you said it wasn't unreasonable.

limetech has already stated there will be no new features in the 4.7 series.

 

Have you tried the 5.x series?  And if so, what is stopping its use?

Link to comment
  • 2 weeks later...
  • 1 month later...

Tom has disclosed a major bug in 4.7: http://lime-technology.com/forum/index.php?topic=13866.0

I believe I've encountered this bug; see here: http://lime-technology.com/forum/index.php?topic=12884.msg132178#msg132178

So, where is 4.7.1? It's been 4½ months now, and honestly that's just about 4½ months too long to fix a bug of this severity in the "stable" release branch. I have a drive that's starting to reallocate sectors and I want to rebuild it. Not a happy camper here.

Link to comment

I too have been wanting this fix, as I run two 4.7 servers and I don't want to move to 5.0. In the meantime, I strictly follow one rule: if a disk rebuild or disk upgrade is in progress, I do not write to the array. The probability of the bug being triggered is small, but the severity is great, so luckily the workaround is easy.

Link to comment

I too have been wanting this fix, as I run two 4.7 servers and I don't want to move to 5.0. In the meantime, I strictly follow one rule: if a disk rebuild or disk upgrade is in progress, I do not write to the array. The probability of the bug being triggered is small, but the severity is great, so luckily the workaround is easy.

 

Does this bug affect parity rebuilds too?

Link to comment

Tom has disclosed a major bug in 4.7: http://lime-technology.com/forum/index.php?topic=13866.0

I believe I've encountered this bug; see here: http://lime-technology.com/forum/index.php?topic=12884.msg132178#msg132178

So, where is 4.7.1? It's been 4½ months now, and honestly that's just about 4½ months too long to fix a bug of this severity in the "stable" release branch. I have a drive that's starting to reallocate sectors and I want to rebuild it. Not a happy camper here.

 

+1

 

Is there ever going to be a 4.7.1, or has the 4.x series been abandoned so that we have to upgrade to the 5.x series to fix this problem, even though limetech itself has said that series is not stable?

Link to comment

I too have been wanting this fix, as I run two 4.7 servers and I don't want to move to 5.0. In the meantime, I strictly follow one rule: if a disk rebuild or disk upgrade is in progress, I do not write to the array. The probability of the bug being triggered is small, but the severity is great, so luckily the workaround is easy.

 

Does this bug affect parity rebuilds too?

It would affect any rebuild, but a parity rebuild followed by a parity "check" would detect it and correct parity.

The problem is when reconstructing a data drive, as there is no equivalent "check".

As stated, the workaround is NOT to write to a data drive while reconstructing that drive in the array.
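
To see why a parity "check" can catch this while a reconstructed data drive cannot be checked: parity is simply the XOR of the corresponding blocks on all data disks, so it can always be recomputed and compared against what is stored on the parity disk, but a reconstructed data block has no second copy to compare against. A rough sketch (just an illustration, not unRAID code):

/* Illustration only, not unRAID code: XOR parity can be recomputed
 * from the data disks and compared with the stored parity block; a
 * reconstructed data block has nothing to compare against, so no
 * equivalent "check" exists for it. */
#include <stdio.h>
#include <string.h>

#define NDATA 3     /* pretend data disks */
#define BLOCK 512   /* bytes per block    */

int main(void)
{
    unsigned char data[NDATA][BLOCK];
    unsigned char parity[BLOCK] = {0};
    unsigned char calc[BLOCK]   = {0};

    memset(data, 0xAA, sizeof data);        /* arbitrary contents */

    /* build parity as the XOR of all data blocks */
    for (int d = 0; d < NDATA; d++)
        for (int i = 0; i < BLOCK; i++)
            parity[i] ^= data[d][i];

    /* the "parity check": recompute and compare */
    for (int d = 0; d < NDATA; d++)
        for (int i = 0; i < BLOCK; i++)
            calc[i] ^= data[d][i];

    printf("parity %s\n", memcmp(calc, parity, BLOCK) ? "MISMATCH" : "ok");
    return 0;
}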

Link to comment

I too have been wanting this fix, as I run two 4.7 servers and I don't want to move to 5.0. In the meantime, I strictly follow one rule: if a disk rebuild or disk upgrade is in progress, I do not write to the array. The probability of the bug being triggered is small, but the severity is great, so luckily the workaround is easy.

 

Does this bug affect parity rebuilds too?

It would affect any rebuild, but a parity rebuild followed by a parity "check" would detect it and correct parity.

The problem is when reconstructing a data drive, as there is no equivalent "check".

As stated, the workaround is NOT to write to a data drive while reconstructing that drive in the array.

There is a second, equally serious bug in the 4.7 version of unRAID, as shown here:

http://lime-technology.com/forum/index.php?topic=16523.0

and here:

http://lime-technology.com/forum/index.php?topic=16471.0

and here:

http://lime-technology.com/forum/index.php?topic=15385.0

Attempting to reconstruct a super.dat file will result in the MBR of existing data drives being rewritten, often pointing to the wrong starting sector.  The result: drives that show as unformatted (until the partitioning is corrected in the MBR), and a potential loss of all data if the unRAID owner does something on their own that wipes the drive.

An unwritable super.dat, or a complete replacement of the flash drive, will cause this bug to show itself in the 4.7 series.
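
If you think you have been bitten by this, it is worth checking what starting sector the MBR now records for partition 1 before letting anything "fix" the drive. A rough sketch of such a check (my own illustration, not an unRAID tool; the MBR keeps its partition table at byte offset 446, with each 16-byte entry holding the 32-bit little-endian starting LBA at offset 8):

/* Illustration only, not an unRAID utility: print the starting sector
 * recorded for partition 1 in a drive's MBR so it can be compared with
 * a known-good drive before taking any corrective action. */
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s /dev/sdX\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("open"); return 1; }

    unsigned char mbr[512];
    if (fread(mbr, 1, sizeof mbr, f) != sizeof mbr) {
        perror("read");
        fclose(f);
        return 1;
    }
    fclose(f);

    const unsigned char *p = mbr + 446;   /* first partition table entry */
    uint32_t start = p[8] | p[9] << 8 | p[10] << 16 | (uint32_t)p[11] << 24;

    printf("partition 1 starts at sector %u\n", (unsigned)start);
    return 0;
}

Compare the printed value against a drive from the same array that is known to be good before doing anything destructive.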

 

Joe L.

Link to comment

I too have been wanting this fix, as I run two 4.7 servers and I don't want to move to 5.0. In the meantime, I strictly follow one rule: if a disk rebuild or disk upgrade is in progress, I do not write to the array. The probability of the bug being triggered is small, but the severity is great, so luckily the workaround is easy.

 

Does this bug affect parity rebuilds too?

It would affect any rebuild, but a parity rebuild followed by a parity "check" would detect it and correct parity.

The problem is when reconstructing a data drive, as there is no equivalent "check".

As stated, the workaround is NOT to write to a data drive while reconstructing that drive in the array.

There is a second, equally serious bug in the 4.7 version of unRAID, as shown here:

http://lime-technology.com/forum/index.php?topic=16523.0

and here:

http://lime-technology.com/forum/index.php?topic=16471.0

and here:

http://lime-technology.com/forum/index.php?topic=15385.0

Attempting to reconstruct a super.dat file will result in the MBR of existing data drives being rewritten, often pointing to the wrong starting sector.  The result: drives that show as unformatted (until the partitioning is corrected in the MBR), and a potential loss of all data if the unRAID owner does something on their own that wipes the drive.

An unwritable super.dat, or a complete replacement of the flash drive, will cause this bug to show itself in the 4.7 series.

 

Joe L.

 

Once again: are these issues being addressed, or must we wait for the 5.x series to become stable?

Link to comment

Tom has disclosed a major bug in 4.7: http://lime-technology.com/forum/index.php?topic=13866.0

I believe I've encountered this bug; see here: http://lime-technology.com/forum/index.php?topic=12884.msg132178#msg132178

So, where is 4.7.1? It's been 4½ months now, and honestly that's just about 4½ months too long to fix a bug of this severity in the "stable" release branch. I have a drive that's starting to reallocate sectors and I want to rebuild it. Not a happy camper here.

 

It annoys me when people constantly complain about 5.0 still being in beta.  However, this concerns me greatly.  It seems that this was fixed in 5.0 beta 8.  How can we justify six more beta releases without the promised update to 4.7, which was previously believed to be stable?  Somehow the priorities seem to be a bit skewed here...

Link to comment

You know the bug, deal with it.

 

Maybe so, but potential new customers don't.

 

Keep in mind that the mere act of mounting a file system creates a write on that file system.

If you are recovering from some other issue and the file system is replaying journal transactions, that means more writes.
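
If you do need to get at data while a rebuild is running, one way to keep writes to a minimum is a read-only mount. A rough sketch using the Linux mount(2) call (the device, mount point and file-system type below are placeholders, and note that a dirty journal may still be replayed even on a read-only mount):

/* Illustration only: mount a data disk read-only so that browsing it
 * during a rebuild does not generate writes to the array.  The device,
 * mount point and file-system type are placeholders, not unRAID defaults. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    if (mount("/dev/md1", "/mnt/disk1", "reiserfs", MS_RDONLY, NULL) != 0) {
        perror("mount");
        return 1;
    }
    printf("mounted read-only\n");
    return 0;
}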

 

I suppose if we knew what the driver changes were, we could do it ourselves.

Still, this should be tidied up if a crucial bug exists in a stable production release version.

 

This is one situation where I stand with those wanting a fix.

 

Link to comment

You know the bug, deal with it.

 

Maybe so, but potential new customers don't.

 

Keep in mind that the mere act of mounting a file system creates a write on that file system.

If you are recovering from some other issue and the file system is replaying journal transactions, that means more writes.

 

I suppose if we knew what the driver changes were, we could do it ourselves.

Still, this should be tidied up if a crucial bug exists in a stable production release version.

 

This is one situation where I stand with those wanting a fix.

 

I think this old patch to the "md" code describes the issue:

http://www.spinics.net/lists/raid/msg33994.html

 

Although there are many changes in the unRAID "md" driver code between the 4.7 version and the 5.0 versions, the old and new lines equivalent to the patch described on spinics.net are:

Old code in unraid.c:

/* If we're trying to read a failed disk, then we must read
 * parity and all the "other" disks and compute it.
 */
if ((col->read_bi || (failed && sh->col[failed_num].read_bi)) &&
    !buff_uptodate(col) && !buff_locked(col)) {
    if (disk_valid( col)) {
        dprintk("Reading col %d (sync=%d)\n", i, syncing);
        set_buff_locked( col);
        locked++;
        set_bit(MD_BUFF_READ, &col->state);
    }
    else if (uptodate == disks-1) {
        dprintk("Computing col %d\n", i);
        compute_block(sh, i); /* also sets it Uptodate */
        uptodate++;

        /* if failed disk is enabled, write it */
        if (disk_enabled( col)) {
            dprintk("Writing reconstructed failed col %d\n", i);
            set_buff_locked( col);
            locked++;
            set_bit(MD_BUFF_WRITE, &col->state);
        }

        /* this stripe is also now in-sync */
        if (syncing)
            set_bit(STRIPE_INSYNC, &sh->state);
    }
}

 

New code in unraid.c in the 5.0-beta series (note: the three lines at the end of the old code above, the "/* this stripe is also now in-sync */" block that sets STRIPE_INSYNC, shown in red on the forum, are removed; the stripe is no longer assumed to be in sync):

/* If we're trying to read a failed disk, then we must read
 * parity and all the "other" disks and compute it.
 * Note: if (failed > 1) there won't be any reads posted to a
 * failed drive because they would have been terminated above.
 */
if ((col->read_bi || (failed && sh->col[failed_num].read_bi)) &&
    !buff_uptodate(col) && !buff_locked(col)) {
    if (disk_valid( col)) {
        dprintk("Reading col %d (sync=%d)\n", i, syncing);
        set_buff_locked( col);
        locked++;
        set_bit(MD_BUFF_READ, &col->state);
    }
    else if (uptodate == disks-1) {
        dprintk("Computing col %d\n", i);
        compute_block(sh, i); /* also sets it Uptodate */
        uptodate++;

        /* if failed disk is enabled, write it */
        if (disk_enabled( col)) {
            dprintk("Writing reconstructed failed col %d\n", i);
            set_buff_locked( col);
            locked++;
            set_bit(MD_BUFF_WRITE, &col->state);
        }
    }
}

 

Joe L.

Link to comment

You know the bug, deal with it.

 

Maybe so, but potential new customers don't.

 

Keep in mind that the mere act of mounting a file system creates a write on that file system.

If you are recovering from some other issue and the file system is replaying journal transactions, that means more writes.

 

I suppose if we knew what the driver changes were, we could do it ourselves.

Still, this should be tidied up if a crucial bug exists in a stable production release version.

 

This is one situation where I stand with those wanting a fix.

 

I think this old patch to the "md" code describes the issue:

http://www.spinics.net/lists/raid/msg33994.html

 

Although there are many changes in the unRAID "md" driver code between the 4.7 version and the 5.0 versions, the old and new lines equivalent to the patch described on spinics.net are:

Old code in unraid.c:

/* If we're trying to read a failed disk, then we must read
 * parity and all the "other" disks and compute it.
 */
if ((col->read_bi || (failed && sh->col[failed_num].read_bi)) &&
    !buff_uptodate(col) && !buff_locked(col)) {
    if (disk_valid( col)) {
        dprintk("Reading col %d (sync=%d)\n", i, syncing);
        set_buff_locked( col);
        locked++;
        set_bit(MD_BUFF_READ, &col->state);
    }
    else if (uptodate == disks-1) {
        dprintk("Computing col %d\n", i);
        compute_block(sh, i); /* also sets it Uptodate */
        uptodate++;

        /* if failed disk is enabled, write it */
        if (disk_enabled( col)) {
            dprintk("Writing reconstructed failed col %d\n", i);
            set_buff_locked( col);
            locked++;
            set_bit(MD_BUFF_WRITE, &col->state);
        }

        /* this stripe is also now in-sync */
        if (syncing)
            set_bit(STRIPE_INSYNC, &sh->state);
    }
}

 

New code in unraid.c in the 5.0-beta series (note: the three lines at the end of the old code above, the "/* this stripe is also now in-sync */" block that sets STRIPE_INSYNC, shown in red on the forum, are removed; the stripe is no longer assumed to be in sync):

/* If we're trying to read a failed disk, then we must read
 * parity and all the "other" disks and compute it.
 * Note: if (failed > 1) there won't be any reads posted to a
 * failed drive because they would have been terminated above.
 */
if ((col->read_bi || (failed && sh->col[failed_num].read_bi)) &&
    !buff_uptodate(col) && !buff_locked(col)) {
    if (disk_valid( col)) {
        dprintk("Reading col %d (sync=%d)\n", i, syncing);
        set_buff_locked( col);
        locked++;
        set_bit(MD_BUFF_READ, &col->state);
    }
    else if (uptodate == disks-1) {
        dprintk("Computing col %d\n", i);
        compute_block(sh, i); /* also sets it Uptodate */
        uptodate++;

        /* if failed disk is enabled, write it */
        if (disk_enabled( col)) {
            dprintk("Writing reconstructed failed col %d\n", i);
            set_buff_locked( col);
            locked++;
            set_bit(MD_BUFF_WRITE, &col->state);
        }
    }
}

 

Joe L.

 

 

Sorry Joe L., but for those of us who are not coders, what does this mean in plain English?

Please don't take this the wrong way; there was no sarcasm intended, I just want to understand too.

Thanks

Link to comment

Sorry Joe L., but for those of us who are not coders, what does this mean in plain English?

Please don't take this the wrong way; there was no sarcasm intended, I just want to understand too.

Thanks

Basically, the old code assumed, when constructing a stripe of data to be written to a disk (either when initially calculating parity or when reconstructing a replaced disk), that no other "writes" had also been made to that same stripe.

Instead, it used only what was calculated from the other disks in the array.  It zeroed out an indicator that might have been set if an actual write to the array had changed that same stripe of data, so the actual write would then never occur.

For the bug to affect you, you would have to write to the exact same set of blocks (a stripe) being calculated at that specific moment.  As mentioned in another thread, this bug has been in the "md" driver in all versions of Linux for years.
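
If it helps, here is a toy model in C of that sequence; it is only an illustration of how the pending write gets dropped, not the actual driver code:

/* Toy model, not the real md driver: a write lands on a stripe while
 * that stripe is being reconstructed; the buggy reconstruction path
 * overwrites the buffer with the computed contents and clears the
 * pending-write indicator, so the user's write is silently lost. */
#include <stdio.h>
#include <string.h>

struct stripe {
    unsigned char buf[512];
    int write_pending;          /* set when a user write must still be flushed */
};

static void user_write(struct stripe *s, const unsigned char *newdata)
{
    memcpy(s->buf, newdata, sizeof s->buf);
    s->write_pending = 1;
}

/* buggy path: assumes nothing else touched the stripe */
static void reconstruct(struct stripe *s, const unsigned char *computed)
{
    memcpy(s->buf, computed, sizeof s->buf);
    s->write_pending = 0;       /* the pending user write now never happens */
}

int main(void)
{
    struct stripe s = { {0}, 0 };
    unsigned char newdata[512], computed[512];

    memset(newdata,  0x11, sizeof newdata);   /* what the user just wrote  */
    memset(computed, 0x22, sizeof computed);  /* stale contents, recomputed */

    user_write(&s, newdata);
    reconstruct(&s, computed);

    printf("write pending: %d, user data kept: %s\n",
           s.write_pending,
           memcmp(s.buf, newdata, sizeof s.buf) ? "no (lost)" : "yes");
    return 0;
}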

 

Joe L.

 

Link to comment
