johnodon Posted December 12, 2014

I know this is a BIG one and I have no idea if this has been considered, but I know it is something I would be interested in...

I would love the option of having my parity be either real-time (like unRAID) or scheduled (like snapraid). The customer would have the ability to decide how they want to configure their parity.

The more educated I become in the world of parity and data protection, the more I realize that a backup scheme like snapraid may better suit my environment... mostly movies/TV shows. Any *important* data on my server is backed up via other means. I don't think I have a true need for real-time parity, but I love everything else that unRAID has to offer: web UI, virtualization, etc. Also, with a scheduled parity scheme, write speed is not sacrificed the way it is with real-time parity.

I know that one of the main arguments against this will be "install a cache drive". I am limited by physical bays, so this is not really an option.

Thoughts?

John
itimpi Posted December 12, 2014

The architecture of unRAID means you either have real-time parity or no parity at all.
johnodon Posted December 12, 2014

"The architecture of unRAID means you either have real-time parity or no parity at all."

So the architecture couldn't be reworked for v7?
WeeboTech Posted December 12, 2014

Most people are asking for dual parity. There are others asking for scheduled parity checks and partial parity checks. Offline batch parity has merit for your purposes, but probably not for others who want it in real time and do not want to concern themselves with running a schedule to parity-protect the drives in the array. Plus, you have to figure how long it takes to run a parity sync/check now; that's probably how long it would take to do it on a schedule.

A bit of hardware could help you. If the case in your sig is the one with limits, it looks as though there is room for SSDs on one of the sides. There are products which allow you to bolt in devices; I remember a few hard drive coolers that came with the template for drilling the appropriate holes (as much as that bites). Then there are the slot-based mSATA cards: 4 mSATA cards and a controller in 1 slot. There are also devices to help move some of those drives into unused space:

ORICO PCI25-2S Aluminum 2 Bay 2.5-inch to PCI Internal SATA / IDE Hard Drive or SSD Mounting Bracket Adapter
http://www.newegg.com/Product/Product.aspx?Item=9SIA1DS0FR8898

StarTech S25SLOTR 2.5in SATA Removable Hard Drive Bay for PC Expansion Slot
http://www.newegg.com/Product/Product.aspx?Item=N82E16817998052

While I understand the reason for the suggestion, and it's an interesting one, I'm not sure it's feasible. The moment you write to a disk, the parity is invalidated; parity is maintained on a sector-by-sector basis. After that, consider the programming effort to satisfy this need vs. the effort of dual parity, which has been requested for years. A lil bit of hardware on your side may satisfy your need.

If you use unRAID without a parity drive, I bet there's a way to install snapraid and use that, albeit via manual configuration.
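For anyone tempted by that manual snapraid route, the setup might look something like this: a minimal sketch, assuming snapraid has been installed by hand on the unRAID box. The disk paths, file locations, and cron schedule below are hypothetical and would need to match your own array layout.

```
# snapraid.conf -- hypothetical layout:
# disk1 holds the parity file, disk2/disk3 are data disks
parity  /mnt/disk1/snapraid.parity
content /boot/config/snapraid.content
data d2 /mnt/disk2/
data d3 /mnt/disk3/

# crontab entry (also hypothetical): batch-update parity nightly at 3am
# 0 3 * * * /usr/local/bin/snapraid sync
```

The trade-off is exactly the one discussed above: writes land at full disk speed, but anything written since the last `snapraid sync` is unprotected until the next scheduled run.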
PeterB Posted December 13, 2014

"A bit of hardware could help you."

With 2+16TB in 12 bays, I would have thought that the simplest option is to invest in a couple of larger drives. Some of your drives must currently be less than 2TB. Merge two of them onto a new 2TB drive, and redesignate one of the smaller drives as cache.

"I'm not sure it's feasible. The moment you write to a disk, the parity is invalidated. Parity is on a sector by sector basis."

Just so!
sgibbers17 Posted December 13, 2014

Using a cache drive would be similar to snapraid, as your parity is written when you schedule the mover to write to the protected array.
PeterB Posted December 13, 2014

"Using a cache drive would be similar to snapraid as your parity is written when you schedule the mover to write to the protected array."

I think that this was already acknowledged in the original post!
bonzi Posted December 13, 2014

This may be a totally sacrilegious suggestion, or not even possible, but would limetech consider having snapraid as an option for this purpose?
sgibbers17 Posted December 13, 2014

"I think that this was already acknowledged in the original post!"

Somehow I completely missed that.
WeeboTech Posted December 13, 2014

"I know that one of the main arguments against this will be 'install a cache drive'. I am limited by physical bays so this is not really an option."

I think a lil creative hardware swapping would satisfy this. As I suggested, there are PCIe slot holders for drives and mSATA cards. Someone suggested swapping out the drives themselves, which is the way I've been going: a few drive swaps, sell off the old, fold in the new, and I have a cost-effective upgrade. A hardware swap would certainly provide a faster path than having limetech program the initial request.
mr-hexen Posted December 15, 2014

Or this little guy could help. I plan on adding one when required in my Lian Li Q25.

http://www.apricorn.com/products/desktop-ssd-hdd-upgrade-kits/vel-solox1.html
lionelhutz Posted December 15, 2014

You can run all drives in parallel when writing, which has doubled the write speed for the users who have tried it. I forget where, but the setting is there. The only downside is that it must spin up all disks to write.
WeeboTech Posted December 15, 2014

Enable Turbo Write mode:
/root/mdcmd set md_write_method 1

Disable Turbo Write mode:
/root/mdcmd set md_write_method 0

This is very effective for small arrays. In my 4-drive micro server, I can write for an extended period of time at 55MB/s without turbo write; with turbo write this changes to 115MB/s. It requires all drives to be spinning and available for read. There may be diminishing returns with larger/wider arrays; this is yet to be seen/tested and documented by someone.
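Those two commands can be wrapped in a small script so turbo write is only enabled for the duration of a bulk transfer. This is a sketch, not an official unRAID tool: the /root/mdcmd path is unRAID-specific, so the script falls back to printing the command on other systems, and the rsync line is a hypothetical placeholder for whatever transfer you run.

```shell
#!/bin/sh
# Sketch: enable unRAID "turbo write" only while a bulk copy runs.
MDCMD=/root/mdcmd

set_write_method() {
  # On a real unRAID box /root/mdcmd exists; elsewhere just show the command.
  if [ -x "$MDCMD" ]; then
    "$MDCMD" set md_write_method "$1"
  else
    echo "would run: $MDCMD set md_write_method $1"
  fi
}

set_write_method 1   # turbo write on (all array drives must be spun up)
# ... do the large transfer here, e.g.:
# rsync -a /mnt/cache/ /mnt/disk1/
set_write_method 0   # back to the normal read-modify-write parity method
```

Running it around the mover or a big rsync keeps the faster all-drives-spinning write path scoped to the job instead of leaving it on permanently.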