unRAID Project Roadmap Announcements



...say you are using RAID 1 or RAID 5, you are going to spin up multiple disks for 1 movie again...

 

If you use BTRFS to create a RAID 1 or RAID 5 array, all of the drives in that BTRFS array will need to be spinning.

 

 

Therefore, if you create a cache disk pool using RAID 1 or some other RAID level, the drives in that array will need to be spinning.

 

As far as I understand it, accessing data on an individual drive in the unRAID array will not spin up the drives that do not contain that data. Writes will always require the data drive plus the parity drive.
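
To make the difference concrete, here is a toy Python sketch (an illustration of the concept only, not actual unRAID or BTRFS code, and the disk names are made up) of which disks have to spin for each kind of access:

```python
# Toy model of the spin-up difference (illustration only; not how unRAID
# or BTRFS is actually implemented).

def striped_raid_read(pool_disks):
    # In a striped array (e.g. RAID 5), a file's blocks are spread across
    # all members, so a read touches every disk in the set.
    return set(pool_disks)

def unraid_read(data_disk):
    # In unRAID, each file lives whole on a single data disk.
    return {data_disk}

def unraid_write(data_disk, parity_disk="parity"):
    # Writes touch the target data disk plus the parity disk.
    return {data_disk, parity_disk}

print(striped_raid_read(["sdb", "sdc", "sdd"]))  # all three members spin
print(unraid_read("disk3"))                      # only disk3 spins
print(unraid_write("disk3"))                     # disk3 plus parity
```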

 

 

Link to comment

...say you are using RAID 1 or RAID 5, you are going to spin up multiple disks for 1 movie again

 

But why would you implement RAID 1 or RAID 5 on top of unRAID? Why not just rely on unRAID to continue doing the great job it always has, just with a new/better file system?

 

 

For the cache drive/pool.

Link to comment

...say you are using RAID 1 or RAID 5, you are going to spin up multiple disks for 1 movie again

 

But why would you implement RAID 1 or RAID 5 on top of unRAID? Why not just rely on unRAID to continue doing the great job it always has, just with a new/better file system?

 

For the cache drive/pool.

 

But movies won't be stored on the cache pool, so the answer to the original question about RAID and BTRFS is no: watching a movie will not spin up all drives. Not unless it has just been downloaded, and even then it will only spin up the cache pool, which is likely to be running anyway if one is a habitual downloader.

Link to comment

I have never understood the obsession with spun-down drives on here. There are countless studies that show spinning down your drives does not extend their life.

 

As far as power consumption goes, a 7200 RPM spinner draws around 10-25 watts (depending on your drive). My nightlights throughout my house use more power than my array. On top of that, RAID drives can sleep when not in use, so in the middle of the night and during the day while everyone is at work / school they aren't going to be spun up. We aren't going to be using RAID within BTRFS for all our unRAID drives anyway. It's not practical (I explained why a few pages back) and defeats the purpose of unRAID.

 

For most of us, a 12 pack of beer is what it will cost a year, unless you have something like 90+ drives and run them all 24/7.

 

I drive a car that gets 15 MPG and I don't recycle my bath / toilet water. If I have a bunch of VMs that are important to me and I put them in a RAID 1 "cache drive pool", I couldn't care less about a few bucks a year or whether I am killing polar bears.

Link to comment

These are all very welcome features! Being able to easily add applications with Docker will make me that much happier about having moved from WHS to unRAID.

 

I have never understood the obsession with spun-down drives on here. There are countless studies that show spinning down your drives does not extend their life.

 

I have 10 disks in my server. The biggest benefit I see from having them spin down is heat. I live in a tiny condo, so the server has to be in my room. A room with more than one computer running is absolutely killer in the summer.

Link to comment

 

I have never understood the obsession with spun-down drives on here. There are countless studies that show spinning down your drives does not extend their life.

 

The biggest benefit I see from having them spin down is heat.

 

Same with me.  It warms the room up.  In the winter I don't mind.  During the summer it is a different issue.

Link to comment

Another benefit of spun-down drives beyond power and heat is noise. With newer drives I don't think this is as big a deal, but older drives were much louder. Some folks (like customers of our AVS 10/4) want as little noise from their system as possible.

 

Another reason someone might want BTRFS on their array is solely for the copy-on-write benefits. We've been discussing internally the best ways to leverage BTRFS within unRAID, but ultimately we think most folks will probably use XFS for their large content-storing disks (movies and such). BTRFS's biggest strength in unRAID will be cache pooling.

Link to comment

http://en.wikipedia.org/wiki/XFS

 

Wow, you're right. XFS sounds perfect for unRAID.

 

XFS excels in parallel input/output (I/O) operations due to its design, which is based on allocation groups (a type of subdivision of the physical volumes in which XFS is used, also shortened to AGs). Because of this, XFS enables extreme scalability of I/O threads, file system bandwidth, and size of files and of the file system itself when spanning multiple physical storage devices.

 

XFS ensures the consistency of data by employing metadata journaling and supporting write barriers. Space allocation is performed via extents with data structures stored in B+ trees, improving the overall performance of the file system, especially when handling large files. Delayed allocation assists in the prevention of file system fragmentation; online defragmentation is also supported. A feature unique to XFS is the pre-allocation of I/O bandwidth at a pre-determined rate, which is suitable for many real-time applications; however, this feature was supported only on IRIX, and only with specialized hardware.

 

A notable XFS user, NASA Advanced Supercomputing Division, takes advantage of these capabilities deploying two 300+ terabyte XFS filesystems on two SGI Altix archival storage servers, each of which is directly attached to multiple Fibre Channel disk arrays.[11]

 

 

Link to comment

I could see people using both.

 

BTRFS points that interest me:

 

Background scrub process for finding and fixing errors on files with redundant copies

 

Checksums on data and metadata (crc32c)

Compression (zlib and LZO)

Batch, or out-of-band deduplication (happens after writes, not during)

 

Additional features in development, or planned, include:

 

Object-level mirroring and striping

Online filesystem check

Hot data tracking and moving to faster devices (currently being pushed as a generic feature available through VFS)

In-band deduplication (happens during writes)

 

I find that these features may be useful for rsync backups, mail archives, source archives, home directories, etc.
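
To illustrate the idea behind out-of-band deduplication (scan what is already on disk and find identical content after the fact), here is a minimal Python sketch of the detection half. The path is hypothetical, and real BTRFS dedup tools work on extents via ioctls rather than whole-file hashes:

```python
import hashlib
import os
from collections import defaultdict

def file_digest(path, bufsize=1 << 20):
    # Hash file contents in chunks so large files don't exhaust memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def find_duplicate_files(root):
    # Group regular files by content hash; keep only groups with duplicates.
    groups = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path) and not os.path.islink(path):
                groups[file_digest(path)].append(path)
    return {h: paths for h, paths in groups.items() if len(paths) > 1}

# Hypothetical usage:
# for digest, paths in find_duplicate_files("/mnt/user/backups").items():
#     print(digest[:12], paths)
```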

 

I know my DVR box uses XFS. I could see myself mixing usage.

Link to comment

For most of us, a 12 pack of beer is what it will cost a year.

 

Sounds like you have very expensive beer or uncommonly cheap electricity.

 

A more common rule of thumb is that 1 watt running 7x24 costs $1 per year.

 

Twenty 10-Watt spinners is $200/year. 
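
The rule of thumb is just watts x hours x rate. A quick sketch, assuming a rate of roughly $0.114/kWh (the rate at which one watt-year comes out to about $1):

```python
def annual_cost(watts, rate_per_kwh=0.114, hours_per_day=24):
    # kWh consumed per year for a constant load, times the rate.
    kwh_per_year = watts * hours_per_day * 365 / 1000
    return kwh_per_year * rate_per_kwh

print(round(annual_cost(1), 2))        # ~1.0   -> the $1 per watt-year rule
print(round(annual_cost(20 * 10), 2))  # ~199.73 -> twenty 10 W drives, ~$200/yr
```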

 

And as others have mentioned, for many it is not the cost, but the noise and heat.  The noise issue is also demonstrated by the people looking for temperature-based fan speed control.

 

Different people value different features in different ways.  A critically useful feature to one person may be worthless crap to another person.  Those are thereby opinions, not facts, which illustrates the truism that nearly every sentence on an Internet forum should have "In my opinion..." inserted at the beginning.  Therein lies the balancing act of software development.

Link to comment

Different people value different features in different ways.  A critically useful feature to one person may be worthless crap to another person.  Those are thereby opinions, not facts, which illustrates the truism that nearly every sentence on an Internet forum should have "In my opinion..." inserted at the beginning.  Therein lies the balancing act of software development.

+1

 

The email feature is worthless to me. I've gone months between checking for new emails before, and I don't own a cell phone at all. But I can see it would be a very, very valuable feature for some to have.

Link to comment

For most of us, a 12 pack of beer is what it will cost a year.

 

Sounds like you have very expensive beer or uncommonly cheap electricity.

 

A more common rule of thumb is that 1 watt running 7x24 costs $1 per year.

 

Twenty 10-Watt spinners is $200/year. 

 

And as others have mentioned, for many it is not the cost, but the noise and heat.  The noise issue is also demonstrated by the people looking for temperature-based fan speed control.

 

Different people value different features in different ways.  A critically useful feature to one person may be worthless crap to another person.  Those are thereby opinions, not facts, which illustrates the truism that nearly every sentence on an Internet forum should have "In my opinion..." inserted at the beginning.  Therein lies the balancing act of software development.

Totally agreed. Everyone defines value differently.

Link to comment

Different people value different features in different ways.  A critically useful feature to one person may be worthless crap to another person.  Those are thereby opinions, not facts, which illustrates the truism that nearly every sentence on an Internet forum should have "In my opinion..." inserted at the beginning.  Therein lies the balancing act of software development.

+1

 

The email feature is worthless to me. I've gone months between checking for new emails before, and I don't own a cell phone at all. But I can see it would be a very, very valuable feature for some to have.

 

What if a system speaker beep code was an option in your case?

Link to comment

Different people value different features in different ways.  A critically useful feature to one person may be worthless crap to another person.  Those are thereby opinions, not facts, which illustrates the truism that nearly every sentence on an Internet forum should have "In my opinion..." inserted at the beginning.  Therein lies the balancing act of software development.

+1

 

The email feature is worthless to me. I've gone months between checking for new emails before, and I don't own a cell phone at all. But I can see it would be a very, very valuable feature for some to have.

 

What if a system speaker beep code was an option in your case?

Might hear it - might not. 5 out of 6 servers are in the basement. I'm asleep for <1/3 of the day and at work >1/3 of the day. I just check the arrays when I get home and periodically while I'm there. I would rather not have any more beeps than I currently have with the UPSs and HDD controllers.
Link to comment

Different people value different features in different ways.  A critically useful feature to one person may be worthless crap to another person.  Those are thereby opinions, not facts, which illustrates the truism that nearly every sentence on an Internet forum should have "In my opinion..." inserted at the beginning.  Therein lies the balancing act of software development.

+1

 

The email feature is worthless to me. I've gone months between checking for new emails before, and I don't own a cell phone at all. But I can see it would be a very, very valuable feature for some to have.

 

What if a system speaker beep code was an option in your case?

Might hear it - might not. 5 out of 6 servers are in the basement. I'm asleep for <1/3 of the day and at work >1/3 of the day. I just check the arrays when I get home and periodically while I'm there. I would rather not have any more beeps than I currently have with the UPSs and HDD controllers.

 

And I'm a set-it-and-forget-it, let-me-know-when-something-is-broken type of guy.

 

I've had a few close calls where drives fell out of the array and I did not know it until I saw the array lighting up like a DJ light.

Then there are the few times where there were pending sectors and I did not know it until it was too late.

I do regular parity checks, and they passed. It just so happens that a pending sector must have cropped up since the last parity check, causing me to lose a whole drive's data when the parity drive failed.

Link to comment

And I'm a set-it-and-forget-it, let-me-know-when-something-is-broken type of guy.

I would much rather be a "set it and forget it" guy, but right now and for the foreseeable future I am always looking at the GUI and unMenu anyway while I migrate files. I just spent the last 3 months moving ~30TB of data to my array and then moving ~150TB of data around to get directories on the same drives. I was running out of recording space on my SageTV recording drives, so I just moved a whole drive's contents to an unRAID drive to free up space. Then I moved files around to merge folders spread over multiple drives. I'm down to the last 3TB to move from my SageTV servers now.

 

I've had a few close calls where drives fell out of the array and I did not know it until I saw the array lighting up like a DJ light.

Then there are the few times where there were pending sectors and I did not know it until it was too late.

I do regular parity checks, and they passed. It just so happens that a pending sector must have cropped up since the last parity check, causing me to lose a whole drive's data when the parity drive failed.

That is something I worry about once my data migration is complete. But my next project is to see if I can symlink across servers and change the name and datetime stamp of the links without affecting the files they are linked to. If that is possible, I will spend the rest of this year and most of next year finding the best recordings off my 4 main arrays and eliminating the duplicates off the other servers, if they exist. I figure I might be able to drop to about 100TB of array space and have enough old drives and newly emptied ones to create a backup of everything. That will likely be the project for the year after next. In the meantime I will be replacing older drives that start to have problems, so I don't expect to be hands-off on my arrays for several years yet.
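
For what it's worth, renaming a symlink and setting the link's own timestamp without touching the target looks doable on Linux. A minimal sketch with hypothetical paths, assuming a platform where follow_symlinks=False is supported; whether SageTV reads the link's timestamp or the target's would still need testing:

```python
import os
import time

# Hypothetical paths for illustration.
target = "/mnt/server2/recordings/show-good.ts"
link = "/mnt/user/recordings/show.ts"

os.symlink(target, link)  # create the link pointing at the remote copy

# Renaming the link itself leaves the target file untouched.
renamed = "/mnt/user/recordings/show-renamed.ts"
os.rename(link, renamed)

# Set the link's own atime/mtime rather than the target's.
# follow_symlinks=False maps to utimensat(..., AT_SYMLINK_NOFOLLOW) on Linux.
ts = time.mktime((2014, 7, 4, 12, 0, 0, 0, 0, -1))
os.utime(renamed, (ts, ts), follow_symlinks=False)
```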
Link to comment
Just an FYI to those that asked/brought this up...  I will be addressing v5 this week as well.

 

With tomorrow being a holiday.... 

 

Like many others, I don't keep my server at the bleeding edge, and I'm not romanced by virtualization. So version 5 still matters...

 

Where do we stand?

Link to comment

Sounds like you have very expensive beer or uncommonly cheap electricity.

 

A more common rule of thumb is that 1 watt running 7x24 costs $1 per year.

 

Twenty 10-Watt spinners is $200/year.

 

Ahhh... I see what you did there. You dismissed my cocktail-napkin math and the fact that I said RAID could sleep. You also used an extreme worst-case wattage for all drives and assumed they were reading / writing 24/7/365. I used a more realistic worst case when I said 10-25 watts, knowing that most newer drives (from the last 5+ years) use less than 10 watts. WD Green drives = 4.8 watts when idle and 5.4 watts during reads / writes. Seagate Barracuda = 6.4 watts when idle and 9.2 watts when reading / writing.

 

2 Seagate Barracudas reading / writing for an hour = 18.4 watt-hours.

 

RAID can and does sleep, and I also indicated in my example that it sleeps all night and most of the day. But for the sake of argument let's assume 8 hours a day, 365 days a year (which, averaged out over a year, is a lot higher than my actual usage).

 

2 Seagate Barracudas reading / writing 8 hours a day = 147.2 watt-hours per day, or 0.1472 kWh per day.

 

0.1472 kWh per day x 365 = 53.728 kWh per year.

 

My power company charges 4.5c/kWh in the winter and 8c/kWh in the summer. Let's add in taxes and other charges and use summer rates year-round = 11c/kWh.

 

53.728 kWh x 11c = $5.91 per year

 

That is a good rough estimate of my actual usage with some worst case built in. If I wanted to be anal I could have broken it all down using my SSDs (which are what my RAID 1 cache pool is made of) and used 2 watts instead of the 9.2-watt Seagate spinners.

 

Using the same 2 Seagate Barracuda drives, reading / writing 24/7/365 = $17.73 per year

 

Instead of using my RAID 1 cache pool (which, as you seem to have forgotten, is what my 12 pack of beer example was based on), let's change it to 10 drives in an unRAID setup:

 

10 Seagate Barracudas reading / writing for 8 hours a day = 736 watt-hours (0.736 kWh) per day x 365 days a year = 268.64 kWh per year x 11c = $29.55 per year

 

The ABOVE is a more likely scenario for most users here if they had all 10 drives in a single RAID array that spins down when not in use. Even then, the same 10 Seagate Barracuda drives reading / writing 24/7/365 = $88.65 per year, not the $200-per-year "rule of thumb" figure you threw out.
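
Anyone can re-run these figures; the arithmetic boils down to a few lines of Python:

```python
def yearly_cost(n_drives, watts_each, hours_per_day, rate_per_kwh):
    # kWh per year for the whole set of drives, times the electric rate.
    kwh_per_year = n_drives * watts_each * hours_per_day * 365 / 1000
    return kwh_per_year * rate_per_kwh

print(yearly_cost(2, 9.2, 8, 0.11))    # ~5.91  - two Barracudas, 8 h/day
print(yearly_cost(2, 9.2, 24, 0.11))   # ~17.73 - two Barracudas, 24/7
print(yearly_cost(10, 9.2, 8, 0.11))   # ~29.55 - ten drives, 8 h/day
print(yearly_cost(10, 9.2, 24, 0.11))  # ~88.65 - ten drives, 24/7
```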

 

The point is, whether you have 10 drives in RAID / unRAID / SnapRAID / FlexRAID or 10 individual drives, you are not talking about significant savings in money if one or all 10 of the drives are running when the server is in use. I think it's a great feature of unRAID, but I do not think it's the holy grail, because it does not extend the life of your drives (which is what a MAJORITY of people here think, and why they obsess over drive spin-down) and it doesn't save me a bunch of money.

 

And as others have mentioned, for many it is not the cost, but the noise and heat.  The noise issue is also demonstrated by the people looking for temperature-based fan speed control.

 

Agreed, but let's not kid ourselves either. To some people the whole power / drive spin-up/spin-down thing is more of a religion / OCD thing and not so much about noise or cost, and they freely admit it. The same principle is being demonstrated all over the Linux world when it comes to GCC versus the new LLVM compiler. It's pretty much World War 3 in all the blogs, websites, forums, and communities, and they are literally fighting over X taking Y milliseconds to compile Z.

 

Different people value different features in different ways.  A critically useful feature to one person may be worthless crap to another person.  Those are thereby opinions, not facts, which illustrates the truism that nearly every sentence on an Internet forum should have "In my opinion..." inserted at the beginning.  Therein lies the balancing act of software development.

 

1. I am all about having choice / options.

 

2. Most PMs I get on here are from people thanking me for explaining / educating them on Linux, unRAID and various other technology / programs / etc. in a way they can understand.

 

3. I also never claimed I was stating a fact. When I do, I make sure to point that out in my posts. Not to mention, in the post we are speaking about I was very, very sarcastic (the bit about not recycling my bath / toilet water and killing polar bears), so people knew it was my opinion.

 

4. I stated that I did not understand some people's obsession when it comes to drive spin-down. I stated that if I have a bunch of VMs that I do not want to lose or recreate, I would use fault tolerance (mirror 2 drives) and would not care if it cost me $5 a year. That is how I think and what I would do. However, to some of the people here, the fact that they are using more power (even $5 a year's worth) outweighs spending 8 hours rebuilding a bunch of VMs, losing the data within them, and the downtime.

 

which illustrates the truism that nearly every sentence on an Internet forum should have "In my opinion..." inserted at the beginning.

If people cannot think for themselves or do not possess any deductive reasoning / critical thinking skills, a disclaimer isn't going to do a damn thing. I am not going to waste my time dumbing down every post and listing out in detail every "exception to the rule".

 

Therein lies the balancing act of software development.

 

Agreed.

Link to comment

It's always been about heat and noise for me. When I lived in Rockaway Beach, I did not have air conditioning in my office/studio, and the temperature could easily reach the upper 90s on a summer day. In my new apartment at Coney Island I have to pay for air conditioning; damn straight I don't want heat and AC competing and wasting hard-earned pennies.

 

Also, since moving, one array is in the bedroom and the other is in the living room, so I don't want any noise.

Link to comment

I would much rather be a "set it and forget it" guy, but right now and for the foreseeable future I am always looking at the GUI and unMenu anyway while I migrate files. I just spent the last 3 months moving ~30TB of data to my array and then moving ~150TB of data around to get directories on the same drives. I was running out of recording space on my SageTV recording drives, so I just moved a whole drive's contents to an unRAID drive to free up space. Then I moved files around to merge folders spread over multiple drives. I'm down to the last 3TB to move from my SageTV servers now.

 

To my earlier point about people having OCD about things like power, drive spin-down, etc. and not taking "quality of life" into account...

 

You have storage across 6 servers in your basement: 3 ESXi servers and 6 unRAID servers (including unRAID VMs on your ESXi boxes), and you have what, at least 60 drives?

 

WHY?!?!?!

 

Do you enjoy being miserable? Yeah, part of the problem is unRAID and its 24-drive limit, but still... You just spent 3+ months moving data around to specific drives because you want directories / sharing to work, and all for what? For the sake of making sure your drives power down correctly while you watch something?

 

Forget unRAID! Forget 6 servers! Forget the 24-drive limit! Forget which drive has what on it! Forget having to spend 3+ months moving data around!

 

You are nuts for not using a distributed / object file system across a cluster. You get a million features that unRAID can't dream of, plus you can go crazy with fault tolerance / high availability instead of being limited to a one-drive fault-tolerance system. My advice is something like Ceph, GlusterFS, Lustre, MooseFS, etc.

 

Below is an example of a web GUI for Ceph. There are several for it and for the other distributed / object file systems I listed above.

 

Look how easy it is to manage Ceph across multiple drives on multiple servers from one WebGUI

 

[Screenshots: Dashboard, Workbench, Managing Hosts, Managing Pools, Manage OSDs, Graphs]

 

That is something I worry about once my data migration is complete. But my next project is to see if I can symlink across servers and change the name and datetime stamp of the links without affecting the files they are linked to.

 

What a clusterfuck. This has to be one of the funniest things I have read on here, considering we are talking about doing this across 6 servers and multiple unRAIDs.

 

If that is possible, I will spend the rest of this year and most of next year finding the best recordings off my 4 main arrays and eliminating the duplicates off the other servers, if they exist. I figure I might be able to drop to about 100TB of array space and have enough old drives and newly emptied ones to create a backup of everything. That will likely be the project for the year after next. In the meantime I will be replacing older drives that start to have problems, so I don't expect to be hands-off on my arrays for several years yet.

 

You are stuck in 2005, and the whole "I don't own a cell phone" refusal to think about / adapt to new technology creates 100 times the time and work, plus it costs you A LOT of money (money you think you are saving).

 

All those distributed / object file systems and web GUIs are FREE, which saves you 6 unRAID licenses. Plus you could consolidate all of those drives down to a couple of servers (if you wanted high availability) or one (if you didn't), because you can have more than 24 drives on one machine. That saves you A LOT of money on hardware and power too. You could also take advantage of the many different ways they handle fault tolerance. This alone would save you from MONTHS of moving crap around and god knows how long replacing your data. Plus you get ALL KINDS of capabilities / features like replication, compression, deduplication, encryption, etc. that will make managing your data / servers easier and free up a lot of your time (the quality-of-life stuff I was talking about).

 

Your ESXi boxes could connect to your NAS servers (now 1 or 2) via fiber if you wanted, and your VMs could attach over FCoE, iSCSI, or AoE instead of only having NFS, Samba, or AFP.

 

Plus everything is object-based and you can move things around entirely through point and click. You SHOULD NOT be thinking at the drive level or caring what is on drive 23 of server 2.

Link to comment

That is something I worry about once my data migration is complete. But my next project is to see if I can symlink across servers and change the name and datetime stamp of the links without affecting the files they are linked to.

 

What a clusterfuck. This has to be one of the funniest things I have read on here, considering we are talking about doing this across 6 servers and multiple unRAIDs.

They are on multiple servers because I have a Windows VM on each (2 are ESXi based; it used to be 3) where I record using SageTV. The SageTV VMs have very distinct functions: #1 records SD channels, #2 records HD channels, #3 records OTA channels, and #4 (not a VM) is a backup for #1 and #2. The total number of tuners among all the servers is 36, all capable of recording at one time. If OTA and satellite content providers would start and end a show consistently, and when the guide data says they do, I could have half that many. The living room SageTV server is also my playback device. The majority of those 36 tuners are OTA because it is a crapshoot as to which tuner will tune a channel well enough that I can keep the recording. I record the same OTA shows on multiple SageTV servers, which is why I have multiple recordings. I want the symlinks so that each instance of SageTV thinks a recording is the one it made, even if the GOOD copy is on a different server. The files from SageTV server #4 are on a 4th unRAID server that has a WHS2011 VM on it for backups. The 5th unRAID server contains all my ripped content from BluRays and CDs, because I want it separated from my other ESXi unRAID server / Windows SageTV VMs and always available. The 6th server is where all of the important files and backups from True Image reside; the WHS2011 backups are themselves backed up to this server. unRAID #6 is backed up to offline drives that I rotate to relatives' houses.

 

If that is possible, I will spend the rest of this year and most of next year finding the best recordings off my 4 main arrays and eliminating the duplicates off the other servers, if they exist. I figure I might be able to drop to about 100TB of array space and have enough old drives and newly emptied ones to create a backup of everything. That will likely be the project for the year after next. In the meantime I will be replacing older drives that start to have problems, so I don't expect to be hands-off on my arrays for several years yet.

 

You are stuck in 2005, and the whole "I don't own a cell phone" refusal to think about / adapt to new technology creates 100 times the time and work, plus it costs you A LOT of money (money you think you are saving).

I don't have a cell phone because the number of dropouts and the tinny sound from callers on cell phones to my land line makes me not want to use one. Other land lines calling my land line were fine, so I know it was the cell phone at the other end that had the problem. The on-call cell phone work provided me before I changed groups didn't help; the company is too cheap to do that now, and I'm not going to buy a cell phone just because they won't provide one any more. I would get a PDA (a cell phone sans the phone line) or a tablet, but why? It is just another device with a screen too small for me to read unless I put on reading glasses. If I had a cell phone I might see a reason to check my email more frequently, which is the only reason I mentioned it in the first place.

 

All those distributed / object file systems and web GUIs are FREE, which saves you 6 unRAID licenses. Plus you could consolidate all of those drives down to a couple of servers (if you wanted high availability) or one (if you didn't), because you can have more than 24 drives on one machine. That saves you A LOT of money on hardware and power too. You could also take advantage of the many different ways they handle fault tolerance. This alone would save you from MONTHS of moving crap around and god knows how long replacing your data. Plus you get ALL KINDS of capabilities / features like replication, compression, deduplication, encryption, etc. that will make managing your data / servers easier and free up a lot of your time (the quality-of-life stuff I was talking about).
Just remember, I came from a Windows environment, so anything Linux is Greek to me. unRAID was easy to set up and offered independent file systems on the drives and a little bit of fault tolerance. Even with unRAID I took my time because it was Linux based. I looked at it two years earlier but didn't take the leap until I lost data on my hardware RAID setup in Windows one too many times. I finally switched when I saw that I could read the disks in Windows, because unRAID put a complete file system on each disk while still pooling them. The fault tolerance in unRAID gave me back what the hardware RAID card gave me: the ability to recover from a single drive failure. Are any of your free options Windows based? That would still be easier for me. I stopped looking when I moved to unRAID, so I will have to check out your ideas.
Link to comment
