Your Chance to Chime In


limetech


I'm with you, I need 3GB support. I'm tired of buying additional drives when I've got 3GB drives already in the array with 1/3 of their space being unutilized. Plus, the longer this goes on and the more 3GB drives I use as 2GB drives, the more time it is going to take me to convert them all to 3GB drives to recover the lost space. I'm really not looking forward to that as it is.

 

Look, the people with that hardware or thinking of buying that hardware are already SOL and need to wait for a solution. Why make all of us wait? Release 5.0 with a disclaimer for the known issue please.

 

3GB drives are fully supported in v4.7, as well as 4GB, and all drives up to 2000GB;-)

 

Touche :)

Link to comment

Using RC10, I deleted super.dat after assigning disks to the array, as well as copying over the contents of the config directory from the install package.  Each time I rebooted the server there were no disks assigned to the array, but there were never any other problems. Whether the partitions started on sector 63 or 64 did not appear to matter. I tested it several different ways with drives less than 2TB, but wasn't able to reproduce the bug as described.

That is good news indeed.

 

Well ... isn't good news first replicating the problem in [not-RC10], then finding that the same steps do not replicate it in RC10?

 

That was my first thought. Just not having it happen on one system doesn't prove it won't happen. Start with 4.7. If you find a combination that creates the failure then repeat the exact same test in RC10.

 

The following using 4.7 did not cause any errors:

 

1. downgrading the existing RC10 array to clean install of 4.7

2. creating an array and then deleting super.dat and rebooting

3. creating an array and replacing the contents of the config directory with the install files

4. deleting disk.cfg, super.dat and then cutting the power to the server during a parity sync

 

In all those cases, I was able to assign the disks to the correct slots, start the array and see my files. I'm going to test a couple more things involving an unclean power-down (flipping the switch on the power supply) to see if that is the cause, as simply deleting the files doesn't appear to reproduce the bug.

 

Of course, if I swap the parity and data drive assignments and start the array, the parity drive shows as unformatted and the data drive is overwritten, but isn't that expected?
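
For anyone who wants to repeat tests 2-4 above, here is a rough sketch of the commands involved, assuming the stock unRAID layout with the config directory at /boot/config (test arrays only -- this throws away your disk assignments):

  # Test 2: remove the saved disk assignments, then reboot
  rm /boot/config/super.dat
  reboot

  # Test 3: overwrite the config directory with the defaults from the install package
  # (/path/to/install/config is a placeholder for wherever you unzipped the release)
  cp /path/to/install/config/* /boot/config/

  # Test 4: remove disk.cfg and super.dat, then cut power during the parity sync
  rm /boot/config/disk.cfg /boot/config/super.dat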

Link to comment

 

The following using 4.7 did not cause any errors:

 

1. downgrading the existing RC10 array to clean install of 4.7

2. creating an array and then deleting super.dat and rebooting

3. creating an array and replacing the contents of the config directory with the install files

4. deleting disk.cfg, super.dat and then cutting the power to the server during a parity sync

 

In all those cases, I was able to assign the disks to the correct slots, start the array and see my files. I'm going to test a couple more things involving an unclean power-down (flipping the switch on the power supply) to see if that is the cause, as simply deleting the files doesn't appear to reproduce the bug.

 

Of course, if I swap the parity and data drive assignments and start the array, the parity drive shows as unformatted and the data drive is overwritten, but isn't that expected?

 

Ballsy.  8)

Link to comment

The following using 4.7 did not cause any errors:

 

1. downgrading the existing RC10 array to clean install of 4.7

2. creating an array and then deleting super.dat and rebooting

3. creating an array and replacing the contents of the config directory with the install files

4. deleting disk.cfg, super.dat and then cutting the power to the server during a parity sync

 

In all those cases, I was able to assign the disks to the correct slots, start the array and see my files. I'm going to test a couple more things involving an unclean power-down (flipping the switch on the power supply) to see if that is the cause, as simply deleting the files doesn't appear to reproduce the bug.

 

Of course, if I swap the parity and data drive assignments and start the array, the parity drive shows as unformatted and the data drive is overwritten, but isn't that expected?

If you are in the mood to try more scenarios to break things, you need to set one data drive to MBR-aligned and another to MBR-unaligned (starting sector 64 and 63 respectively), then repeat your testing with two more cases: one setting the default to aligned before deleting the config, and another setting it to unaligned before deleting the config. In one of those instances, I'm pretty sure one of the data drives will have its MBR rewritten, invalidating the partition and failing to mount.
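
If you do try that pair of tests, one quick way to confirm which layout each data drive actually has before and after is to list its partition table in sectors (sdb here is only a placeholder for the drive in question):

  # Show partition start sectors; the first partition starts at 63 (unaligned)
  # or 64 (MBR 4K-aligned)
  fdisk -lu /dev/sdb
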
Link to comment

Try this process in your testing please.

 

1. preclear drives to start at sector 63

2. add drive(s) to 4.7 array

3. powerdown server

4. install on flash drive like new system

5. start up the machine with the new flash drive

 

 

Do the above except for step 3 just pull the power

 

I've tested that scenario, except with 2 drives using sector 63 and 1 using sector 64. I've tried quite a few things, and while I could force the array to start with the wrong drives, I've never been able to reproduce the unformatted disk error. I'm preclearing all the drives and may do some more tests later. The following did not cause problems:

 

1. running rm -rf * from /boot or / while the array was running, and then booting with a fresh install

2. cutting power during a parity sync or while copying data

3. upgrading an existing array to RC10 or downgrading to 4.7

4. pulling the flash drive from a running system and then cutting power

5. deleting files in the config directory or setting them to read-only

6. changing the default partition starting sector from 63 to 64, deleting the disk assignments and rebooting

7. intentionally assigning disks to the wrong slots (but not starting the array) and then cutting power to the server

8. editing super.dat with garbage data and deleting disk.cfg

9. deleting the config directory (just didn't boot)
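
For reference, test 8 can be reproduced with something along these lines (a destructive sketch for a test array only, assuming the usual /boot/config location for the files):

  # Overwrite super.dat with random garbage, then remove disk.cfg and reboot
  dd if=/dev/urandom of=/boot/config/super.dat bs=512 count=4
  rm /boot/config/disk.cfg
  reboot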

 

Link to comment

For my .02. 

 

I think it makes the most sense to release as is with a disclaimer about the specific known hardware issues. The fact is it works fine for everyone else, and I think it will be good for the future of the platform, as future potential users will see 3TB support in a stable release. I can't tell you how many posts I have read in various forums about people going to other platforms due to the lack of 3TB support in a stable release. While I made the jump when you hit the RCs, many people won't look past the name even though it works in their case.

 

This approach does not slow down the permanent fix for 5.1 or 6.0, and I think it will be good in the long run, as long as the temporary limitation is clear.

 

Beyond that, I would take the fastest solution to a stable release that removes the limitations before incorporating any further changes. Cool new features will make a smaller group of users happy, but stability wins the race for the future of this platform. I am a believer in it and want to see it succeed.

Link to comment

I haven't read most of the 11 pages in response to this, but here's what I think:

 

- I'm still using 5.0 beta14 because it was what was released at the time of my hardware upgrade. There was no update for about 6 months, and most of the issues people had were worked around and then merged into the later unraid versions.

- I'm a subscriber to these forums and I've noticed that the latest releases have had no major features added to them, but are mostly stability fixes (and I haven't had stability problems, so I didn't bother with them.)

- Even in the latest release there's a problem for a small number of users, but as has been said, it's on new hardware; people don't tend to load old releases on new machines, so everyone who will have the problem potentially has it already!

- If you mark rc10 as final, I, and probably the other few hundred people who are waiting, will upgrade and you'll be flooded with bug reports to put into 5.0.1, and that will no doubt turn into a super-stable release that should have been 5.0, but couldn't be because you can't make people upgrade to something *marked* as unstable.

 

Anyway, work on the 64 bit kernel in the background, mark rc10 as final, make a big deal about running the permissions tool (big header in the web interface and bash logon message) and get ready to iron out the bugs that will flood in :)
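
On the permissions tool: what that step boils down to is normalizing ownership and permissions across the data disks so the 5.x user shares behave. A hypothetical illustration of that kind of normalization (NOT the actual unRAID script -- use the tool shipped with the release):

  # Hypothetical sketch only: reset ownership and open up permissions on each data disk
  for d in /mnt/disk*; do
      chown -R nobody:users "$d"
      find "$d" -type d -exec chmod 777 {} \;
      find "$d" -type f -exec chmod 666 {} \;
  done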

 

Great work btw, Tom. Keep it up!

Link to comment

I think releasing a 32-bit 5.0 final now would quieten down the people bitching about the lack of the word "final" in the release name; then you can concentrate on slowly releasing beta/RC releases of a new 64-bit edition (ver 5.1?) to get around the slow write issue. I think basically scrapping all the good testing you've had done on 5.0 so far by radically changing over to a 64-bit kernel would be a bad move. Just my thoughts; I know some people will strongly disagree.

Link to comment

Get 5.0 out now as is.

 

I caveat this with one concern. We have been in RC for so long, and with so much forum noise, that most/many people have long since stopped considering the RCs as release candidates. This means that when 5.0 comes out we will almost certainly find a load of new edge-case issues as a lump of new testers come on board. We can mitigate this with timing. Once 5.0 is out we should gear up to capture new issues in a slick way and close them with point releases. If this means 20 point releases in 20 days, so be it. The end result will be an end to this perpetual "where is it" noise.

 

Personally, once that is done I would like to see 64-bit unRAID. My focus is on using more memory, and whilst PAE is a good kludge, it is a kludge nonetheless. We spend thousands on unRAID hardware and disks, and nowadays 16 or 32GB of RAM is a trivial cost.
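
For context, a couple of standard commands show whether the running kernel is 32-bit and how much of the installed RAM it actually exposes:

  uname -m                    # i686 for the current 32-bit unRAID kernel, x86_64 for a 64-bit one
  free -m                     # total memory the kernel can address, in MB
  grep -c pae /proc/cpuinfo   # non-zero if the CPU advertises PAE at all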

Link to comment

not just "point releases" but call them nightlies or weeklies and put them out that frequently.  Then the brave testers can run with them to help evaluate the good-/bad-ness of the code.  Meanwhile the stable release chuggs along with less frequent merges of the code commits meant for the masses.

 

And in fairness, I think this is what someone else in this thread suggested, but using different terminology, and I pooh-poohed it ... sorry, I was not in the right frame of mind :-\

Link to comment

Get 5.0 out now as is.

 

I caveat this with one concern. We have been in RC for so long, and with so much forum noise, that most/many people have long since stopped considering the RCs as release candidates. This means that when 5.0 comes out we will almost certainly find a load of new edge-case issues as a lump of new testers come on board. We can mitigate this with timing. Once 5.0 is out we should gear up to capture new issues in a slick way and close them with point releases. If this means 20 point releases in 20 days, so be it. The end result will be an end to this perpetual "where is it" noise.

 

Personally, once that is done I would like to see 64-bit unRAID. My focus is on using more memory, and whilst PAE is a good kludge, it is a kludge nonetheless. We spend thousands on unRAID hardware and disks, and nowadays 16 or 32GB of RAM is a trivial cost.

 

Yup. 5.0 now. 5.1 can introduce 64bit unRAID.

 

I *STILL* have parity check issues on anything after Beta12a though. I've tried those write *fixes* in that thread but they didn't fix it. It's really annoying to have 24-hour parity checks when beta12a gives me 8-hour parity checks. Both of my servers are identical and both have this issue.
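
If it helps narrow it down, one generic sanity check (not an unRAID-specific diagnostic) is to time raw sequential reads on each drive while the array is idle; a single slow disk will drag the whole parity check down to its speed. sd[b-e] below is a placeholder for your actual devices:

  # Rough per-disk read benchmark
  for d in /dev/sd[b-e]; do
      echo "$d"
      hdparm -t "$d"
  done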

 

Parts in sig.

Link to comment

Tom,

 

Is there a deadline for this chiming in? Maybe it would be wise to set one.

I voted for one of the standard options and am wondering what the final decision is going to be. Looking at the poll, 5.0 final should be here very soon.

 

Good luck with the development of version 5.1 or the 64 bit 6.0  beta versions and of course renaming RC10 to 5.0 final ;-)

 

 

 

Link to comment

Myself, I'd prefer to see 5.0 go final as is, and future dev as this:

 

 

- 5.0.x for bug fixes that *may* pop up as more users jump into the 5.0 waters.

 

- 5.1, 5.2, 5.3, etc. slated for feature additions for ease of use (i.e. built-in disk diag tools, reiserfsck --check commands, etc.; see the sketch after this list).

 

- 6.0 for 64bit.
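
On the reiserfsck idea: the read-only check is already usable from the console today, roughly like this, with md1 as a placeholder for the disk to test and the filesystem unmounted first:

  # Read-only ReiserFS consistency check (reports problems without fixing them)
  reiserfsck --check /dev/md1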

 

The people using crazy amounts of RAM are greatly exceeding the requirements for unRAID. The ONLY reason for that much RAM is add-ons and plug-ins. IMO development should not cater to those users, but to the CORE functionality of unRAID and to helping users diagnose their systems from the webGUI.

 

 

my .02.

Link to comment

Try this process in your testing please.

 

1. preclear drives to start at sector 63

2. add drive(s) to 4.7 array

3. powerdown server

4. install on flash drive like new system

5. start up the machine with the new flash drive

 

 

Do the above except for step 3 just pull the power

 

I ran this exact scenario with no data loss:

 

1. new install of 4.7, preclear 4 drives starting on sector 63

2. add 3 drives to array, start array, add data to both disk 1 & 2

3. pull plug from power supply

4. fresh install of 4.7 on USB thumb drive

5. start up server, assign disks and start array

 

I'm not saying this bug cannot occur, just that I can't re-create it with these steps.

Link to comment

Sounds like this is not a likely issue with 4.7; but based on what I understand the issue to be, I think you need two more steps in the test. AFTER the fresh install of 4.7 (step 4), but BEFORE assigning the disks, you should check the box for defaulting to the 64th sector, then delete the super.dat file, and THEN reboot and assign the disks.

 

But based on your tests without these steps, it certainly seems that a v4.7 system with all sector 63 disks is not likely to have any issues even with a rebuilt flash drive.

 

Link to comment

Try this process in your testing please.

 

1. preclear drives to start at sector 63

2. add drive(s) to 4.7 array

3. powerdown server

4. install on flash drive like new system

5. start up the machine with the new flash drive

 

 

Do the above except for step 3 just pull the power

 

I ran this exact scenario with no data loss:

 

1. new install of 4.7, preclear 4 drives starting on sector 63

2. add 3 drives to array, start array, add data to both disk 1 & 2

3. pull plug from power supply

4. fresh install of 4.7 on USB thumb drive

5. start up server, assign disks and start array

 

I'm not saying this bug cannot occur, just that I can't re-create it with these steps.

 

 

What does 4.7 default to, 63 or 64? Preclear the opposite of the default, then install a new 4.7 so it reverts to the default again.

Link to comment

Ballsy.  8)

 

Test system only.  I happened to have some unused but precleared drives sitting around, and a motherboard I was already planning to test for a new 5.x install.

 

Dave_m

 

You are a real saint! I can't believe the amount of time that you are devoting to testing all of these permutations that folks are dreaming up. And I have not seen many respond to you with a "Thank YOU". But I will say it to you---- THANK YOU!!!!

Link to comment

Ballsy.  8)

 

Test system only.  I happened to have some unused but precleared drives sitting around, and a motherboard I was already planning to test for a new 5.x install.

 

Dave_m

 

You are a real saint! I can't believe the amount of time that you are devoting to testing all of these permutations that folks are dreaming up. And I have not seen many respond to you with a "Thank YOU". But I will say it to you---- THANK YOU!!!!

 

While I've not been affected by this particular issue, I too would like to say thanks.  Anything that can help to resolve this issue, or determine what causes it is greatly appreciated. 

Link to comment

...

 

the people using crazy amounts of ram are greatly exceeding the requirements for UNRAID. the ONLY reason for this much ram is for add-ons and plug-ins. IMO development should not cater to those users, but to the CORE functionality of unRAID and helping users diagnose their systems from the webGUI.

...

 

I don't run any plugins except cache_dirs. Does that make me crazy for wanting more non-PAE RAM?

 

Or do I just want more RAM for caches, i.e. the things that make the biggest perceptible change for me?
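
For what it's worth, it's easy to see how much of the installed RAM is already going to caching versus programs:

  free -m                       # the "cached" column is RAM used for the page cache
  grep -i cached /proc/meminfo  # the same figure straight from the kernel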

Link to comment

...

 

the people using crazy amounts of ram are greatly exceeding the requirements for UNRAID. the ONLY reason for this much ram is for add-ons and plug-ins. IMO development should not cater to those users, but to the CORE functionality of unRAID and helping users diagnose their systems from the webGUI.

...

So you DO run plugins then.

 

Don't get me wrong, I run plugins myself, but I only have 3GB of RAM installed.

 

At some point, more is just so people can say "mine's bigger than yours".

 

What needs to be determined is this: what is the amount needed for unRAID, and unRAID alone, without ANY ADD-ONS?

 

cache_dirs is a memory hog, is it not?

 

Not trying to be an asshole here, but are we now considering delaying a fully stable release because users who are customizing unRAID require more RAM?

Link to comment
