Maybe wrong spot but...separate parity from data drives.



I don't think you'd call it a feature request or a defect, but it is a design flaw.

 

I was removing a data drive that wasn't in use yet but had an issue, so the disk couldn't be formatted. Pretty simple operation: stop the array, remove and unassign the problem disk, set a new config, re-assign the data drives, start the array. Parity will rebuild, but that's not a big deal.

 

Well, not paying 100% attention, I put a completely full 2TB data disk in the parity drive slot and started the array. As soon as I hit start, I realized what I had done. I had started the array from my cell phone, so I couldn't stop it quickly; I just cut power to the server. Total time the parity sync ran? Maybe 2-3 seconds. The data drive showed up as unformatted, and it took about two days of pins and needles, but I recovered everything and am back in operation.

 

TL;DR?

 

Give the parity drive slot its own header, or otherwise separate it from the data drives. Or do something that very obviously differentiates it from the data drives. I searched the forum to see if others had run into a similar problem and was amazed to find it is actually a semi-common mistake.

 

I haven't booted or tested v6 yet, so if this change has already been made, disregard this topic; if it hasn't, please consider it.


This is not unique to v6; I did this many years ago, back in the 4.2 days I think.

 

I made a similar suggestion.

 

http://lime-technology.com/forum/index.php?topic=1485.msg9976#msg9976


This just happens too often.

 

I had a recent experience where I accidentally assigned a data disk to the parity slot (see http://lime-technology.com/forum/index.php?topic=1483.0).  Although it was totally my fault, I had some ideas that I thought Tom might consider to try to protect us from doing stupid things.

1.  On the devices page, color-code or visually separate the parity disk so it appears distinct from the data disk slots.  This would make it harder to accidentally assign a data disk to the parity slot.

2.  Do not bring the array online if the parity disk is formatted with a ReiserFS filesystem.  Add a checkbox or something to allow a user to bypass this check and proceed to build parity on what was previously a data disk.
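
As a rough illustration of suggestion 2 (a sketch only, not actual unRAID code), the check could be little more than asking blkid whether the parity partition probes as a known filesystem; the device name below is just an example:

import subprocess

def has_filesystem(partition):
    # "blkid -p" probes the partition directly; with "-o value -s TYPE"
    # it prints the filesystem type (e.g. "reiserfs") or nothing at all.
    out = subprocess.run(
        ["blkid", "-p", "-u", "filesystem", "-o", "value", "-s", "TYPE", partition],
        capture_output=True, text=True,
    )
    return out.stdout.strip() != ""

if has_filesystem("/dev/sdd1"):  # example parity partition
    print("Refusing to build parity: filesystem detected. Tick the override box to proceed.")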

 

Perhaps limit the parity slot to drives that have been precleared or that carry some other type of signature.

It's too easy to make this mistake.

 

Yet it might also be difficult to detect that the wrong disk has been mounted.

 

On unRAID 5, I ran the following and was surprised to see my parity disk identified as a ReiserFS filesystem.

/dev/sdd is my parity disk. There must be some other way to tell.  Is there some MBR data that can be used?

 

 

root@unRAID:# blkid -p -u filesystem /dev/sd*
/dev/sda: PTTYPE="dos"
/dev/sda1: LABEL="UNRAID" UUID="9016-4EF8" VERSION="FAT32" TYPE="vfat" USAGE="filesystem"
/dev/sdb: PTTYPE="gpt"
/dev/sdb1: UUID="d1375649-cdd8-4b64-b8b9-282d9d27f768" VERSION="3.6" TYPE="reiserfs" USAGE="filesystem"
/dev/sdc: PTTYPE="gpt"
/dev/sdc1: UUID="bd1bdef0-9333-418a-b2bb-e7ddc3ce8e87" VERSION="3.6" TYPE="reiserfs" USAGE="filesystem"
/dev/sdd: PTTYPE="gpt"
/dev/sdd1: UUID="33450ce6-9d52-47c8-9dc0-05ca8e8a4aab" VERSION="3.6" TYPE="reiserfs" USAGE="filesystem"
/dev/sde: PTTYPE="gpt"
/dev/sde1: UUID="5f69845f-c3b9-4d26-97c2-ca3ad0633344" VERSION="3.6" TYPE="reiserfs" USAGE="filesystem"
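
For what it's worth, the MBR partition table does hold a type byte that parity never touches. Below is a minimal sketch of reading it, assuming an MBR-partitioned disk (a GPT disk like those above would just report the protective type 0xee); purely illustrative, not unRAID code:

import sys

def first_partition_type(dev):
    # The MBR partition table starts at offset 0x1be; the type byte is
    # the fifth byte of the 16-byte partition entry.
    with open(dev, "rb") as f:
        mbr = f.read(512)
    if mbr[510:512] != b"\x55\xaa":
        raise ValueError("no MBR boot signature on " + dev)
    return mbr[0x1BE + 4]

if __name__ == "__main__":
    # Run as root against a whole device, e.g. /dev/sdd.
    print("first partition type: 0x%02x" % first_partition_type(sys.argv[1]))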


I'm not saying it's only related to v6, but seeing as v5 is final and v6 is being worked on now, I figured this was the most appropriate place to put the suggestion.

 

Anything to help discern parity from data would be an improvement while working toward a good fool proof solution.


 

Correct me if I'm wrong, but the parity drive being identified as ReiserFS is a condition that will occur any time there is an odd number of data drives. The sectors that hold the filesystem information are the same on every disk, so with "even" parity it always works out that the parity drive reflects a ReiserFS filesystem.

 

Let's say a ReiserFS filesystem's first bits are: 1 0 1 1 0 1

Each drive will have the same thing, so with 3 data drives the parity calculation looks like:

Disk1  - 1 0 1 1 0 1
Disk2  - 1 0 1 1 0 1
Disk3  - 1 0 1 1 0 1
Parity - 1 0 1 1 0 1
Equals - 4 0 4 4 0 4 (each column sums to an even number)

 

This may be obvious to everyone else, but I figured I'd explain it just in case.
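
If it helps, here is a toy sketch of the same arithmetic, assuming plain XOR ("even") parity: with an odd number of identical data blocks, the computed parity block comes out byte-for-byte identical to the blocks themselves, which is exactly why the parity drive can probe as ReiserFS.

from functools import reduce

superblock = bytes([0b101101] * 16)  # stand-in for identical filesystem headers
disks = [superblock] * 3             # an odd number of data drives

def xor_parity(blocks):
    # Even parity: each parity byte is the XOR of that byte across all disks.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

print(xor_parity(disks) == superblock)  # True: parity mirrors the header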

 

a good fool proof solution.

There is no such thing. There will always be a fool who can get around any protection. The best you can do is make it so the fool has to work at it to screw things up.

 

No, there is no such thing. But I'd like to think I'm no fool, and the current setup still got me, so anything would be better than what we've got. Granted, a new config isn't exactly the most common operation, but it does leave the door open to potentially devastating data loss.

 

We all know unRAID isn't a replacement for backups. That said, we all rely on unRAID and its parity protection to help avoid data loss, and having the parity drive slot mixed in with the data drives works against that.


Parity protection acts only on the first partition, not the whole drive. Rebuilding a data drive first requires creating a valid partition and then reconstructing the data onto that partition.

 

But I'm not sure how the data gets written to the parity drive. Maybe a partition is created and the parity data is then written to that partition?


Due to unRAID's clear and format process, the first partition spans the whole drive (with the exception of the cache disk).

 

And you are right: the drive must have a valid partition to be rebuilt, and that data lives in the MBR, which is NOT covered by parity. But the filesystem info resides at the start of the partition, and it is included in the parity calculation.


Consider that a two-drive unRAID system, one parity and one data drive, is essentially a pair of mirrors, so even a test like trial-mounting a ReiserFS filesystem on the parity drive would succeed. There is no good way to guarantee identifying what is parity and what is not, unless you store something outside the parity-controlled area.

 

I'd like to suggest using the partition filesystem type (here's one table of them) to differentiate the parity drive from all other drives.  We currently use type 83 for our Linux filesystems and for the parity drive.  Why not change the parity drive to a currently unused type, like 89 or 8f?

 

The procedure would be (a rough code sketch follows the list) -

* On a normal start of an operational array, a future unRAID release would detect that the parity drive is type 83 and automatically change it to 8f (used as an example here); no user notification is needed.

* If it is not a normal array start, that is, a parity build is required, then the partition type is checked and:

** if it is 8f, operation proceeds as normal and parity is built

** if it is not 8f, verification is required ("Drive may contain data, will be destroyed", Proceed or Cancel); a ReiserFS/XFS/BTRFS/etc. mount could be attempted, and if successful, that is reported

** if they proceed with the parity build, the parity drive is marked 8f and parity is built
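
A rough Python sketch of that decision flow, with made-up names (0x8f is only the example marker from the list above):

PARITY_TYPE, LINUX_TYPE = 0x8F, 0x83

def parity_slot_action(current_type, parity_build_needed, user_confirmed=False):
    # Decide what the array-start code should do with the parity slot.
    if not parity_build_needed:
        # Normal start of an operational array: silently upgrade the marker.
        return "retag as 0x8f and start" if current_type == LINUX_TYPE else "start"
    if current_type == PARITY_TYPE:
        return "build parity"
    # Never tagged as parity: the drive may hold a data filesystem.
    return ("build parity and retag" if user_confirmed
            else "warn: drive may contain data and will be destroyed")

# Example: a full data disk (type 0x83) dropped into the parity slot.
print(parity_slot_action(0x83, parity_build_needed=True))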


I think that would be a good option. Since the partition type sits outside the parity calculation, it would not be affected by something like the ReiserFS fluke weebos showed above (which is something I'd like to test: put in 3 drives and build parity, then try to mount the parity drive as ReiserFS and see if it succeeds).

 

Tom, what do you think? Is it doable and a viable option?
