unRAID Server Release 5.0-beta6a Available



The first 4 bytes that differ correspond to the MBR "Disk Signature" field at offset 440. Supposedly the Linux kernel, version 2.6 and later, can (and does) make use of this signature at boot time to determine the location of the boot volume. As for the next two bytes at offset 444, the only thing I can find is comments about how they're usually NULL.

 

I'm making the assumption that LILO does indeed make use of the Disk Signature and the 2 bytes after it, offsets 440-445, to determine which device maps to /boot and /.

 

Here's the first trace of this feature I could find: http://lkml.org/lkml/2003/12/19/139

 

*EDIT*: It is definitely LILO that's the culprit. The last 2 bytes, at offset 444, are "CF C9", which is LILO's magic signature. The preceding bytes are used as serial IDs for the individual drives.

 

#define MAX_BOOT_SIZE   0x1b6   /* (leave some space for NT's and DR DOS' dirty
                                  hacks) scream if the boot sector gets any
                                  bigger -- (22.5 - we now use those hacks) */

#define MAGIC_SERIAL    0xC9CF  /* LILO installed serial number */
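
For anyone who wants to check their own drives, here's a quick way to dump the six bytes in question, using the same dd/od approach as the traces later in this thread. A minimal sketch, assuming the drive is /dev/sda (substitute your own device):

# Dump the 4-byte MBR Disk Signature (offsets 440-443) plus the
# 2 bytes after it (offsets 444-445) in hex.
dd if=/dev/sda bs=1 skip=440 count=6 2>/dev/null | od -An -t x1

On a drive that LILO has stamped, the last two bytes should read "cf c9", i.e. the 0xC9CF MAGIC_SERIAL above stored little-endian.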

 

 

Link to comment

Not wanting to sidetrack, but has anyone tried any of the 5.0 betas with an AOC-USAS2-L8e card? Results?

 

I'm trying it in a test box, and whenever I try to add a drive connected to that card to the array, everything just locks up and I have to physically restart the box. Everything is fine (temps read correctly for drives on the card, etc.) until I try to start the array with a drive on the card.

 

If no one is using this card then any recommendations on a comparable card (same interface, etc) would be much appreciated.

Link to comment

I also don't want to hijack the discussion but I have a similar question.

 

Were device drivers removed in the move from version 4 to version 5? Here is a detailed description of what I mean:

http://lime-technology.com/forum/index.php?topic=11018.0

 

Btw, some feedback: installed the latest beta without any hassle. Even unMenu with all its add-ons is running well, plus TwonkyServer and MySQL. I do not have a cache drive installed.

Link to comment

I also don't want to hijack the discussion but I have a similar question.

 

Were device drivers removed in the move from version 4 to version 5? Here is a detailed description of what I mean:

http://lime-technology.com/forum/index.php?topic=11018.0

 

Btw, some feedback: installed the latest beta without any hassle. Even unMenu with all its add-ons is running well, plus TwonkyServer and MySQL. I do not have a cache drive installed.

 

+1

I also want to know if this card works with the latest beta.

Link to comment

I installed 5.0 beta 6a on a new server, precleared 9 drives, and made a 9-drive array (no parity drive at this point). Everything was fine up to here: I was able to see the drives from my PC and read/write with no issues. I set up a share called Movies: high-water, split level 1, included disks (blank), excluded disks (blank). The share shows up correctly on each drive and I am able to access it as Movies with no issue.

I installed 3 more drives yesterday and precleared them with no issue, then stopped the array and assigned them to disk10, disk11, and disk12. I started the array again and everything shows perfect, MBR all OK on all 12 drives. I go into User Shares and click on Movies, go to included disks and type disk1,disk2,disk3...disk11,disk12, click Apply, then Done.

My expectation was that the Movies share would now also be on the 3 new disks 10, 11, and 12, but it does not appear on those drives. I can access the 3 drives through their individual network shares with no issues, but the expected Movies folder is not on those drives, and when I write to the Movies share it only writes to the first 9 drives. I tried going back into Shares and setting the included disks back to (blank), but the problem persists. Am I doing something wrong, or is this a bug in 6a?

Link to comment

I installed 5.0 beta 6a on a new server, precleared 9 drives, and made a 9-drive array (no parity drive at this point). Everything was fine up to here: I was able to see the drives from my PC and read/write with no issues. I set up a share called Movies: high-water, split level 1, included disks (blank), excluded disks (blank). The share shows up correctly on each drive and I am able to access it as Movies with no issue.

I installed 3 more drives yesterday and precleared them with no issue, then stopped the array and assigned them to disk10, disk11, and disk12. I started the array again and everything shows perfect, MBR all OK on all 12 drives. I go into User Shares and click on Movies, go to included disks and type disk1,disk2,disk3...disk11,disk12, click Apply, then Done.

My expectation was that the Movies share would now also be on the 3 new disks 10, 11, and 12, but it does not appear on those drives. I can access the 3 drives through their individual network shares with no issues, but the expected Movies folder is not on those drives, and when I write to the Movies share it only writes to the first 9 drives. I tried going back into Shares and setting the included disks back to (blank), but the problem persists. Am I doing something wrong, or is this a bug in 6a?

The directories on user-share disks are not created until they are needed. That timing will depend on the allocation method you've selected. You've done nothing wrong; your expectation that the directory is created when you "include" a disk as part of a user share is simply mistaken.

 

Joe L.
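
If you don't want to wait for the allocation method to reach the new disks, you can create the top-level folder by hand; a minimal sketch, assuming the standard /mnt/diskN mount points and the Movies share from the post above:

# A user share is just the union of same-named top-level folders
# across the data disks, so creating the folder makes the new
# disks part of the Movies share immediately.
for d in 10 11 12; do
  mkdir -p /mnt/disk$d/Movies
done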

Link to comment

There is a problem with the spindown in 5.0b6a.

 

I have my default setting set to spin down after 5 hours, and the per-disk settings of SOME of the drives (on the BR10i controller) set to Never. But unRAID is never spinning down any of the disks. Spin-up groups are also disabled. I rebooted and waited overnight, and the drives are all still spun up.

 

There are no spindown messages in the syslog indicating that it is trying and failing.  I am running on the C2SEE-O motherboard.

 

Screenshots of the default disk settings, a disk on the motherboard settings, and a disk on the BR10i settings are provided.

Attached screenshots: 5.b66a_Disk_Settings.JPG, MB_Disk_Setting.JPG, BR10i_Settings.JPG
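
One way to separate "unRAID never issues the spindown" from "the spindown is issued but fails" is to send the command by hand and then query the drive's power state. A sketch, assuming the 5.0 md driver still accepts commands written to /proc/mdcmd as in 4.x, and that disk 1 is /dev/sdb on your box:

# Ask the unRAID engine to spin down data disk 1 ...
echo "spindown 1" > /proc/mdcmd
sleep 10
# ... then ask the drive itself; it should report "standby".
hdparm -C /dev/sdb

If hdparm still reports "active/idle", the drive (or controller) is ignoring the spindown, which would point at the BR10i rather than the timer logic.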

Link to comment

I just did a wholesale rearrangement of my disks and which controllers they were connected to.  When I booted, all disks were correctly assigned to the right slots and the array started.  With prior versions I'd have been on the Devices page, unassigning and reassigning disks for 5 minutes to get everything assigned right.  This is a great enhancement.  Thanks Tom!

Link to comment

Thought I would post in the spirit of beta testing.  This is showing up in the syslog during boot: note the "mdcmd (46): check NOCORRECT" and "recovery thread woken up" lines in the middle of the mounts, and the "recovery thread has nothing to resync" line later (a grep sketch for pulling these out follows the excerpt).  No parity check was requested and no parity check is running.  If this is by design, no problem; just thought I'd point it out in case it is a problem.

 

...

Mar 12 10:32:45 Shark emhttp: shcmd (22): mkdir /mnt/disk9 (Routine)
Mar 12 10:32:45 Shark emhttp: shcmd (23): mkdir /mnt/disk6 (Routine)
Mar 12 10:32:45 Shark emhttp: shcmd (24): mkdir /mnt/disk5 (Routine)
Mar 12 10:32:45 Shark emhttp: shcmd (25): set -o pipefail ; mount -t reiserfs -o noatime,nodiratime /dev/md1 /mnt/disk1 2>&1 |logger (Other emhttp)
Mar 12 10:32:45 Shark emhttp: shcmd (26): set -o pipefail ; mount -t reiserfs -o noatime,nodiratime /dev/md3 /mnt/disk3 2>&1 |logger (Other emhttp)
Mar 12 10:32:45 Shark emhttp: shcmd (28): set -o pipefail ; mount -t reiserfs -o noatime,nodiratime /dev/md9 /mnt/disk9 2>&1 |logger (Other emhttp)
Mar 12 10:32:45 Shark emhttp: shcmd (29): set -o pipefail ; mount -t reiserfs -o noatime,nodiratime /dev/md4 /mnt/disk4 2>&1 |logger (Other emhttp)
Mar 12 10:32:45 Shark emhttp: shcmd (27): set -o pipefail ; mount -t reiserfs -o noatime,nodiratime /dev/md2 /mnt/disk2 2>&1 |logger (Other emhttp)
Mar 12 10:32:45 Shark emhttp: shcmd (30): set -o pipefail ; mount -t reiserfs -o noatime,nodiratime /dev/md8 /mnt/disk8 2>&1 |logger (Other emhttp)
Mar 12 10:32:45 Shark emhttp: shcmd (31): set -o pipefail ; mount -t reiserfs -o noatime,nodiratime /dev/md6 /mnt/disk6 2>&1 |logger (Other emhttp)
Mar 12 10:32:45 Shark emhttp: shcmd (32): set -o pipefail ; mount -t reiserfs -o noatime,nodiratime /dev/md5 /mnt/disk5 2>&1 |logger (Other emhttp)
Mar 12 10:32:45 Shark kernel: mdcmd (46): check NOCORRECT (unRAID engine)
Mar 12 10:32:45 Shark kernel: md: recovery thread woken up ... (unRAID engine)
Mar 12 10:32:45 Shark emhttp: shcmd (33): set -o pipefail ; mount -t reiserfs -o noatime,nodiratime /dev/md7 /mnt/disk7 2>&1 |logger (Other emhttp)
Mar 12 10:32:45 Shark kernel: REISERFS (device md9): found reiserfs format "3.6" with standard journal (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md9): using ordered data mode (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md3): found reiserfs format "3.6" with standard journal (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md3): using ordered data mode (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md1): found reiserfs format "3.6" with standard journal (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md1): using ordered data mode (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md2): found reiserfs format "3.6" with standard journal (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md2): using ordered data mode (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md6): found reiserfs format "3.6" with standard journal (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md6): using ordered data mode (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md8): found reiserfs format "3.6" with standard journal (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md8): using ordered data mode (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md4): found reiserfs format "3.6" with standard journal (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md4): using ordered data mode (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md5): found reiserfs format "3.6" with standard journal (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md5): using ordered data mode (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md7): found reiserfs format "3.6" with standard journal (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md7): using ordered data mode (Routine)
Mar 12 10:32:45 Shark kernel: md: recovery thread has nothing to resync (unRAID engine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md9): journal params: device md9, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30 (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md9): checking transaction log (md9) (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md3): journal params: device md3, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30 (Routine)
Mar 12 10:32:45 Shark kernel: REISERFS (device md3): checking transaction log (md3) (Routine)

...
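
If you want to confirm from the console that the NOCORRECT check came and went without doing anything, you can pull just those lines back out of the log; a small sketch, assuming the standard syslog location:

# Show only the parity-check related lines from the boot log.
egrep "check NOCORRECT|recovery thread" /var/log/syslog
# A benign boot prints "recovery thread has nothing to resync"
# shortly after the thread is woken up, as in the excerpt above.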

Link to comment

There is a problem with the spindown in 5.0b6a.

 

I have my default setting set to spin down after 5 hours, and the per-disk settings of SOME of the drives (on the BR10i controller) set to Never. But unRAID is never spinning down any of the disks. Spin-up groups are also disabled. I rebooted and waited overnight, and the drives are all still spun up.

 

There are no spindown messages in the syslog indicating that it is trying and failing.  I am running on the C2SEE-O motherboard.

 

Screenshots of the default disk settings, a disk on the motherboard settings, and a disk on the BR10i settings are provided.

 

Just want to mention that my disks spin down as usual; even my cache drive spins down when I shut off SAB. Here's an extract of my logs:

 

Mar 12 20:30:05 p5bplus kernel: mdcmd (1123): spindown 3 (Routine)
Mar 12 20:30:06 p5bplus kernel: mdcmd (1124): spindown 4 (Routine)
Mar 12 20:30:06 p5bplus kernel: mdcmd (1125): spindown 6 (Routine)
Mar 12 20:30:07 p5bplus kernel: mdcmd (1126): spindown 7 (Routine)
Mar 12 20:30:08 p5bplus kernel: mdcmd (1127): spindown 9 (Routine)
Mar 12 20:30:08 p5bplus kernel: mdcmd (1128): spindown 10 (Routine)
Mar 12 20:30:09 p5bplus kernel: mdcmd (1129): spindown 12 (Routine)
Mar 12 20:30:10 p5bplus kernel: mdcmd (1130): spindown 13 (Routine)
Mar 12 20:30:21 p5bplus kernel: mdcmd (1131): spindown 8 (Routine)
Mar 12 20:36:13 p5bplus kernel: mdcmd (1132): spindown 5 (Routine)
Mar 12 21:16:11 p5bplus kernel: mdcmd (1133): spindown 2 (Routine)
Mar 12 21:57:08 p5bplus kernel: mdcmd (1134): spindown 1 (Routine)
Mar 12 22:09:31 p5bplus kernel: mdcmd (1135): spindown 4 (Routine)
Mar 12 22:09:42 p5bplus kernel: mdcmd (1136): spindown 5 (Routine)
Mar 12 22:34:45 p5bplus kernel: mdcmd (1137): spindown 2 (Routine)
Mar 12 22:35:05 p5bplus kernel: mdcmd (1138): spindown 5 (Routine)
Mar 12 22:35:06 p5bplus kernel: mdcmd (1139): spindown 12 (Routine)
Mar 12 23:21:25 p5bplus kernel: mdcmd (1140): spindown 6 (Routine)

 

Link to comment

There is a problem with the spindown in 5.0b6a.

 

I have my default setting set to spin down after 5 hours, and the per-disk settings of SOME of the drives (on the BR10i controller) set to Never. But unRAID is never spinning down any of the disks. Spin-up groups are also disabled. I rebooted and waited overnight, and the drives are all still spun up.

 

That's sort of odd. I have the 2 drives on the BR10i set to "Never", the other 4 drives on the motherboard set to "use default", and the default spindown set to "45 minutes", and the 4 drives on the motherboard do indeed spin down after being in use. I have spin-up groups enabled with the setting for each drive left blank, so unRAID will determine the groups itself.

 

Here you can see it properly spinning down the parity drive and the drive that was being written to.

 

Mar  9 17:42:40 reaver kernel: mdcmd (39): spindown 0
Mar  9 17:44:31 reaver kernel: mdcmd (40): spindown 1
Mar  9 17:45:12 reaver kernel: mdcmd (41): spindown 3
Mar  9 18:39:32 reaver kernel: mdcmd (42): spindown 0
Mar  9 18:39:32 reaver kernel: mdcmd (43): spindown 2
Mar 11 20:11:47 reaver kernel: mdcmd (44): spindown 0
Mar 11 20:11:48 reaver kernel: mdcmd (45): spindown 3
Mar 11 22:27:05 reaver kernel: mdcmd (46): spindown 0
Mar 11 22:27:05 reaver kernel: mdcmd (47): spindown 3
Mar 12 01:19:43 reaver kernel: mdcmd (49): spindown 0
Mar 12 02:23:58 reaver kernel: mdcmd (50): spindown 1

Link to comment

OK...so I decided to go the cache drive route and the drive arrives in the mail today.  With the recent discussions about cache drive issues with v5b6a, can I just put this thing in and expect it to work out of the box (SEAGATE ST31000528AS)?  Were the issues encountered only with an existing cache drive that was previously formatted?

 

TIA,

 

John

Link to comment

OK...so I decided to go the cache drive route and the drive arrives in the mail today.  With the recent discussions about cache drive issues with v5b6a, can I just put this thing in and expect it to work out of the box (SEAGATE ST31000528AS)?  Were the issues encountered only with an existing cache drive that was previously formatted?

 

TIA,

 

John

The issues only arose when the cache drive had been formatted in a non-standard way by power users who had created multiple partitions on it.
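
For a drive fresh out of the bag this shouldn't be a concern, but if you want to verify the layout before assigning it as cache, a quick check (assuming the new drive shows up as /dev/sdj; substitute yours):

# A disk partitioned by unRAID has exactly one Linux (type 83)
# partition; multiple partitions are the "non-standard" case above.
fdisk -l /dev/sdj
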
Link to comment

I've experienced a major problem upgrading from 5.0b2 to 5.0b6a

 

In my setup there are 8 disks (including one parity)

 

The disks are as follows:

 

parity SAMSUNG_HD204UI (Adv. format)

disk1 ST31500341AS

disk2 ST31500341AS

disk3 ST31500341AS

disk4 ST31500341AS

disk5 WDC_WD15EARS (Adv. format - jumper set)

disk6 WDC_WD15EARS (Adv. format - jumper set)

disk7 WDC_WD15EARS (Adv. format - jumper set)

 

After upgrading, unRAID said that the disk configuration was valid, parity was in sync, and there were no MBR unknowns or errors, so I went ahead and clicked "Start Array"..

 

Disks 2 through 6 then showed "Unformatted" :(

 

Checking the MBR of the disks, unRAID claims:

 

parity MBR: unaligned

disk1 MBR: unaligned

disk2 MBR: 4K-aligned

disk3 MBR: 4K-aligned

disk4 MBR: 4K-aligned

disk5 MBR: 4K-aligned

disk6 MBR: 4K-aligned

disk7 MBR: unaligned

 

So apparently it's all the disks that unRAID believes are 4K-aligned that are seen as Unformatted, even though disks 2, 3, and 4 are not adv. format capable..

 

Now to the serious issue.. Obviously I didn't check the "Format" button, as I knew there were about 8TB of data on the 6 disks, so I went ahead and reloaded my backup, discarding everything related to 5.0b6a.

 

However, when going back to 5.0b2, the 6 disks still show "Unformatted" and aren't recognized by unRAID :(((

 

Where to go from here?.. Can the MBRs be fixed, and if so, how?

 

All disks were originally formatted by unRAID 5.0b2 (coming from NTFS)..

Link to comment

I've experienced a major problem upgrading from 5.0b2 to 5.0b6

 

In my setup there are 8 disks (including one parity)

 

The disks are as follows:

 

parity SAMSUNG_HD204UI (Adv. format)

disk1 ST31500341AS

disk2 ST31500341AS

disk3 ST31500341AS

disk4 ST31500341AS

disk5 WDC_WD15EARS (Adv. format - jumper set)

disk6 WDC_WD15EARS (Adv. format - jumper set)

disk7 WDC_WD15EARS (Adv. format - jumper set)

 

After upgrading, unRAID said that the disk configuration was valid, parity was in sync, and there were no MBR unknowns or errors, so I went ahead and clicked "Start Array"..

 

Disks 2 through 6 then showed "Unformatted" :(

 

Checking the MBR of the disks, unRAID claims:

 

parity MBR: unaligned

disk1 MBR: unaligned

disk2 MBR: 4K-aligned

disk3 MBR: 4K-aligned

disk4 MBR: 4K-aligned

disk5 MBR: 4K-aligned

disk6 MBR: 4K-aligned

disk7 MBR: unaligned

 

So apparently it's all the disks that unRAID believes are 4K-aligned that are seen as Unformatted, even though disks 2, 3, and 4 are not adv. format capable..

 

Now to the serious issue.. Obviously I didn't check the "Format" button, as I knew there were about 8TB of data on the 6 disks, so I went ahead and reloaded my backup, discarding everything related to 5.0b6.

 

However, when going back to 5.0b2, the 6 disks still show "Unformatted" and aren't recognized by unRAID :(((

 

Where to go from here?.. Can the MBRs be fixed, and if so, how?

 

All disks were originally formatted by unRAID 5.0b2 (coming from NTFS)..

Yes, you can fix the MBRs.

 

Good that you did not click the "format" button. That would have complicated the process of getting to your files.

 

Did you know there was a 5.0beta6a release?  It was created just so lime-tech could try to figure out what is happening.  Unfortunately, at this time you've already overwritten the original MBRs, so the detail he might have found helpful is no longer there.

 

You can download the utility I wrote to fix the MBR, or the one that lime-tech made available.  Either will fix the MBRs on the disks showing as unformatted.

You'll need to run it on each of the "unformatted" disks in turn.

 

Joe L.

Link to comment

Yes, you can fix the MBRs.

 

Good that you did not click the "format" button. That would have complicated the process of getting to your files.

 

Did you know there was a 5.0beta6a release?   It was created just so lime-tech could try to figure out what is happening.   Unfortunately, at this time you've already overwritten the original MBRs, so the detail he might have found helpful is no longer there.

 

You can download the utility I wrote to fix the MBR, or the one that lime-tech made available.  Either will fix the MBRs on the disks showing as unformatted.

You'll need to run it on each of the "unformatted" disks in turn.

 

Joe L.

 

Apparently I made a small typo.. It was of course 6a (downloaded today), and not 6, that I tried to upgrade to..

 

Am I correct in assuming that I should run: mkmbr /dev/sdc 63 0x83  => normal MBR, partition 1 starts at sector 63

 

.. on all the Unformatted disks?

 

If there's any information regarding my setup/disks that can help shed some light on what goes wrong, I'll be happy to provide it..

Link to comment

Yes, that will fix them.

 

You've probably already over-written whatever it might have detected.

 

You might run the two commands that Tom requests be run regardless, just in case there is some remaining evidence.  Do this before you "fix" the MBRs.

 

Are you running a Gigabyte motherboard, or were your disks ever attached to one?  Are you using a non-standard boot loader?

What motherboard/BIOS are you running?  Describe your array, its hardware, and its history.

 

 

Link to comment

Yes, that will fix them.

 

You've probably already over-written whatever it might have detected.

 

You might run the two commands that Tom requests be run regardless, just in case there is some remaining evidence.  Do this before you "fix" the MBRs.

 

Are you running a Gigabyte motherboard, or were your disks ever attached to one?  Are you using a non-standard boot loader?

What motherboard/BIOS are you running?  Describe your array, its hardware, and its history.

 

 

 

My array consists of the following:

Motherboard: ASUS P5Q-EM / ICH10R / BIOS v.02.61
CPU: Intel(R) Pentium(R) Dual  CPU  E2180  @ 2.00GHz stepping 0d
Extra PCIe x4 Contr.: Adaptec 1430SA / BIOS v.6.0-0 / all disks added as JBOD / BIOS disabled

Following disks connected to the onboard ICH10R controller:
parity (sdf)
disk2 (sdi)
disk3 (sdg)
disk4 (sdh)

Following disks connected to the internal Adaptec 1430SA controller:
disk1 (sda)
disk5 (sdb)
disk6 (sde)
disk7 (sdd)

 

Boot loader is the default one that comes with unRAID 5.0b2

 

The history of the array is that it was gradually expanded during a migration from Windows Home Server. It started with one disk and grew to 8 as I painfully copied 1TB of data from WHS to an external disk, booted unRAID, added the newly emptied disk, and copied the data back, repeating that 8 times. So all disks were NTFS before and have been formatted by the unRAID web GUI..

 

The results of the two commands:

root@Server:~# cat /sys/block/sda/size
2930277168
root@Server:~# dd status=noxfer count=1 if=/dev/sda | od -Ad -t x1
1+0 records in
1+0 records out
0000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0000448 00 00 83 00 00 00 3f 00 00 00 f1 7a a8 ae 00 00
0000464 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0000496 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa
0000512


root@Server:~# cat /sys/block/sdb/size
2930277168
root@Server:~# dd status=noxfer count=1 if=/dev/sdb | od -Ad -t x1
1+0 records in
1+0 records out
0000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0000448 00 00 83 00 00 00 40 00 00 00 f0 7a a8 ae 00 00
0000464 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0000496 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa
0000512

root@Server:~# cat /sys/block/sdd/size
2930277168
root@Server:~# dd status=noxfer count=1 if=/dev/sdd | od -Ad -t x1
1+0 records in
1+0 records out
0000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0000448 00 00 83 00 00 00 3f 00 00 00 f1 7a a8 ae 00 00
0000464 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0000496 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa
0000512

root@Server:~# cat /sys/block/sde/size
2930277168
root@Server:~# dd status=noxfer count=1 if=/dev/sde | od -Ad -t x1
1+0 records in
1+0 records out
0000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0000448 00 00 83 00 00 00 40 00 00 00 f0 7a a8 ae 00 00
0000464 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0000496 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa
0000512

root@Server:~# cat /sys/block/sdf/size
3907029168
root@Server:~# dd status=noxfer count=1 if=/dev/sdf | od -Ad -t x1
1+0 records in
1+0 records out
0000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0000448 00 00 83 00 00 00 3f 00 00 00 71 88 e0 e8 00 00
0000464 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0000496 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa
0000512

root@Server:~# cat /sys/block/sdg/size
2930277168
root@Server:~# dd status=noxfer count=1 if=/dev/sdg | od -Ad -t x1
1+0 records in
1+0 records out
0000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0000448 00 00 83 00 00 00 40 00 00 00 f0 7a a8 ae 00 00
0000464 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0000496 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa
0000512

root@Server:~# cat /sys/block/sdh/size
2930277168
root@Server:~# dd status=noxfer count=1 if=/dev/sdh | od -Ad -t x1
1+0 records in
1+0 records out
0000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0000448 00 00 83 00 00 00 40 00 00 00 f0 7a a8 ae 00 00
0000464 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0000496 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa
0000512

root@Server:~# cat /sys/block/sdi/size
2930277168
root@Server:~# dd status=noxfer count=1 if=/dev/sdi | od -Ad -t x1
1+0 records in
1+0 records out
0000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0000448 00 00 83 00 00 00 40 00 00 00 f0 7a a8 ae 00 00
0000464 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0000496 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa
0000512

 

The disks being misread/unformatted are:

disk2 ST31500341AS (sdi)

disk3 ST31500341AS (sdg)

disk4 ST31500341AS (sdh)

disk5 WDC_WD15EARS (sdb)

disk6 WDC_WD15EARS (sde)

 

Unfortunately, even though I did try to copy the log files before attempting to roll back to 5.0b2 (in the event that they would prove helpful), I've checked and they contain no data from today's upgrade. I must have copied the wrong files; sorry for being a dumbass..

 

Updated: Just ran:

root@Server:~# mkmbr /dev/sdi 63 0x83

re-reading partition table of disk /dev/sdi

root@Server:~# mkmbr /dev/sdg 63 0x83

re-reading partition table of disk /dev/sdg

root@Server:~# mkmbr /dev/sdh 63 0x83

re-reading partition table of disk /dev/sdh

root@Server:~# mkmbr /dev/sdb 63 0x83

re-reading partition table of disk /dev/sdb

root@Server:~# mkmbr /dev/sde 63 0x83

re-reading partition table of disk /dev/sde

 

And all disks and their contents appear fine now.. *giant sigh of relief*

Link to comment

even though disk 2,3 and 4 are not adv. format capable..

 

FYI, there is no such thing (at least yet) as a disk that is not advanced-format capable. "4K-aligned" simply means the partition starts on sector 64 instead of sector 63. Every disk you have can use either alignment. The advanced-format drives, except for a jumpered EARS, just work better when the partition starts on sector 64.

 

Peter
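
You can actually read the alignment straight out of the od dumps earlier in the thread: in the MBR's first partition entry, the 32-bit starting-LBA field sits at offset 454. A minimal sketch to print it (assuming /dev/sda; adjust as needed):

# Print partition 1's starting sector (bytes 454-457, little-endian).
# 63 = unaligned, 64 = 4K-aligned; this is the "3f" vs "40" visible
# at offset 454 in the dumps above.
dd if=/dev/sda bs=1 skip=454 count=4 2>/dev/null | od -An -t u4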

 

Link to comment
After upgrading, unRAID said that the disk configuration was valid, parity was in sync, and there were no MBR unknowns or errors, so I went ahead and clicked "Start Array"..

 

Disks 2 through 6 then showed "Unformatted"

 

This is puzzling and frustrating.  I'm sure that you experienced precisely the condition which Tom is trying to catch.  Unfortunately, contrary to expectation, the MBR was not 'unknown' prior to the system being started.  However, once started, unRAID decided that it should create a new partition on some of your drives.  This, in itself, may provide a valuable clue.

 

It suggests that either:

1) The test Tom has applied when reporting the MBR before array start is not the same as the test applied when deciding whether the disk is correctly partitioned for unRAID.

 

While this is possible, I don't believe that Tom would get it wrong.

 

2) As the array starts, the (correct) MBR is becoming corrupted between being read from disk and the test being applied.

 

I believe that this is the more likely situation; however, it could be very difficult to track down the source of the corruption.  I can't think of a sensible explanation for why this corruption should occur only on the first array start after a system upgrade; perhaps it is related to incomplete or missing configuration files.  I don't envy Tom!

Link to comment

I just did a wholesale rearrangement of my disks and which controllers they were connected to.  When I booted, all disks were correctly assigned to the right slots and the array started.  With prior versions I'd have been on the Devices page, unassigning and reassigning disks for 5 minutes to get everything assigned right.  This is a great enhancement.  Thanks Tom!

 

I have to second this... great job on that. Also, the semi hot-swap works well: just stop the array, open the door, change the disk, wait a minute, refresh the page, and the disk is there. HANDY!!

Upgraded my parity to an F2 Samsung 2TB, hung a 2-port port-multiplier enclosure (one I had already) on my eSATA port, and added my old parity drive there along with a refurbished 1TB EARS that I got back from RMA.

Rebuilding parity now... already running preclear on the old parity drive...

Also mounted the 500GB disks from USB, and hey, they even show up in unMenu :)

Moving files with MC now. Guess I need to update my signature now...

 

Link to comment

This is puzzling and frustrating.  I'm sure that you experienced precisely the condition which Tom is trying to catch.  Unfortunately, contrary to expectation, the MBR was not 'unknown' prior to the system being started.  However, once started, unRAID decided that it should create a new partition on some of your drives.  This, in itself, may provide a valuable clue.

 

To be honest, I won't guarantee with absolute certainty that there weren't any "MBR: Unknown"s.. I was preoccupied with verifying that all S/Ns matched the ones from my documentation and that all disks were assigned to the correct slots.. Then I looked for anything out of the ordinary, error messages and such..

 

It is possible that I overlooked whether any of the disks said "MBR: Unknown", as I didn't realize the significance of looking for that message..  :-[

Link to comment
