[FEATURE REQUEST] - Option of running either realtime or backup parity



I know this is a BIG one and I have no idea if this has been considered but I know it is something I would be interested in...

 

I would love the option of having my parity be either realtime (like unRAID) or scheduled (like SnapRAID).  The user would have the ability to decide how they want to configure their parity.

 

The more educated I become in the world of parity and data protection, the more I realize that a scheduled scheme like SnapRAID may better suit my environment, which is mostly movies/TV shows.  Any *important* data on my server is backed up by other means.  I don't think I have a true need for real-time parity, but I love everything else that unRAID has to offer: web UI, virtualization, etc.  Also, with scheduled parity, write speed is not sacrificed the way it is with real-time parity.

 

I know that one of the main arguments against this will be "install a cache drive".  I am limited by physical bays, so this is not really an option.

 

Thoughts?

 

John


Most people are asking for dual parity.

 

There are others asking for scheduled parity checks and partial parity checks.

 

The offline batch parity has merit for your purposes, but probably not for others who want real-time protection and do not want to concern themselves with running a schedule to parity-protect the drives in the array.

 

Plus you have to figure how long it takes to run a parity sync/check now.

That's probably how long it will take to do it on a schedule.

 

A bit of hardware could help you. If the case in your sig is the one with limits, it looks as though there is room for SSDs on one of the sides.

There are products which allow you to bolt in devices. I remember a few hard drive coolers that came with the template for drilling the appropriate holes (as much as that bites).

 

Then there are the slot-based mSATA cards: four mSATA SSDs and a controller in one PCIe slot.

 

Then there are these devices to help move some of those drives into unused space.

 

ORICO PCI25-2S Aluminum 2 Bay 2.5-inch to PCI Internal SATA / IDE Hard Drive or SSD Mounting Bracket Adapter

http://www.newegg.com/Product/Product.aspx?Item=9SIA1DS0FR8898

 

StarTech S25SLOTR 2.5in SATA Removable Hard Drive Bay for PC Expansion Slot

http://www.newegg.com/Product/Product.aspx?Item=N82E16817998052

 

I understand the reason for the suggestion, and it's an interesting one.

 

I'm not sure it's feasible. The moment you write to a disk, the parity is invalidated.

Parity is on a sector by sector basis.
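To illustrate the point, here is a minimal sketch (assuming simple XOR parity across the data disks, as single-parity schemes like unRAID's use) of how parity is kept sector by sector, and why a single write invalidates the stored parity until it is recomputed:

```python
# Minimal sketch of sector-level XOR parity (illustrative, not unRAID code).

def parity_of(sectors):
    """XOR together the same-numbered sector from every data disk."""
    out = bytearray(len(sectors[0]))
    for sector in sectors:
        for i, b in enumerate(sector):
            out[i] ^= b
    return bytes(out)

# Three data disks, one sector each (4 bytes here for brevity).
disk1 = bytes([0x10, 0x20, 0x30, 0x40])
disk2 = bytes([0x01, 0x02, 0x03, 0x04])
disk3 = bytes([0xFF, 0x00, 0xFF, 0x00])

stored_parity = parity_of([disk1, disk2, disk3])

# A single write to any disk makes the stored parity stale for that sector:
disk2 = bytes([0xAA, 0x02, 0x03, 0x04])
assert stored_parity != parity_of([disk1, disk2, disk3])
```

Real-time parity recomputes the affected parity sector as part of every write; a scheduled scheme leaves it stale until the next sync runs, and any drive failure in that window loses the unsynced changes.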

 

After that, consider the programming effort to satisfy this need vs the effort of dual parity which has been requested for years.

 

A lil bit of hardware on your side may satisfy your need.

 

If you use unRAID without a parity drive, I bet there's a way to install snapraid and use that, albeit via manual configuration.
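For example, something along these lines might work; the paths and disk names below are hypothetical, and the exact directives should be checked against the SnapRAID manual:

```
# snapraid.conf -- hypothetical layout, adjust paths to your mounts
parity /mnt/disk5/snapraid.parity
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
```

A nightly cron entry running `snapraid sync` would then give roughly the scheduled-parity behavior the original post is asking for.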

A bit of hardware could help you.

 

With 2+16TB in 12 bays, I would have thought that the simplest option is to invest in a couple of larger drives.  Some of your drives must currently be less than 2TB.  Merge two of them onto a new 2TB drive, and redesignate one of the smaller drives as cache.

 

 

I'm not sure it's feasible. The moment you write to a disk, the parity is invalidated.

Parity is on a sector by sector basis.

 

Just so!


Using a cache drive would be similar to SnapRAID, as parity is only written when the mover runs on its schedule and transfers files to the protected array.

 

I think that this was already acknowledged in the original post!

Somehow I completely missed that.

 

 

I know that one of the main arguments against this will be "install a cache drive".  I am limited by physical bays, so this is not really an option.

 

I think a little creative hardware swapping would satisfy this. As I suggested, there are PCIe slot-mounted holders for mSATA cards and drives.

Someone suggested swapping out the drives themselves (which is the way I've been going): a few drive swaps, sell off the old, fold in the new, and I have a cost-effective upgrade.

 

A hardware swap would certainly provide a faster path than having Limetech program the initial request.

 

 

 


Enable Turbo Write mode:

/root/mdcmd set md_write_method 1

 

Disable Turbo Write mode:

/root/mdcmd set md_write_method 0

 

This is very effective for small arrays.

In my 4 drive micro server, I can write for an extended period of time at 55MB/s without turbo write.

With turbo write this changes to 115MB/s.

 

It requires all drives to be spinning and available for read.

There may be diminishing returns with larger/wider arrays.

This has yet to be tested and documented by someone.
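For the curious, the difference between the two write modes can be sketched like this. It is an illustrative model assuming XOR parity, not unRAID's actual code, but it shows why turbo write needs every drive spinning:

```python
# Illustrative model of the two parity-update strategies under XOR parity.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def read_modify_write(old_parity, old_data, new_data):
    # Normal write: spin up only the target disk and the parity disk,
    # read the old sector and old parity, then write both back:
    #   new_parity = old_parity XOR old_data XOR new_data
    return xor(xor(old_parity, old_data), new_data)

def reconstruct_write(other_disk_sectors, new_data):
    # Turbo write: read the same sector from every OTHER data disk and
    # XOR with the new data. All disks must be spinning, but no disk
    # has to read-then-rewrite the same sector, so throughput is higher.
    parity = new_data
    for sector in other_disk_sectors:
        parity = xor(parity, sector)
    return parity

d1, d2, d3 = b"\x10\x20", b"\x01\x02", b"\xf0\x0f"
old_parity = xor(xor(d1, d2), d3)
new_d2 = b"\xaa\xbb"

# Both strategies must produce the same parity for the same write.
assert read_modify_write(old_parity, d2, new_d2) == \
       reconstruct_write([d1, d3], new_d2)
```

Read-modify-write touches only two drives, but each does a read followed by a write of the same sector; reconstruct ("turbo") write trades that for one sequential read per data drive, which is why every drive must be spun up and why it can roughly double write throughput on small arrays.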

 
