
Pre-purchase questions


Shadowflash


Posted

So I've been reading for a few days now and have a few questions for which most answers seem significantly dated, or which I could not find at all.

 

1: Any limitation on processor or socket count?  For reference, tyan s4985 quad socket opteron in particular.

 

2: Hardware RAID cards (lsi8308elp as an example) cannot be used?  This seems a very strange and severe limitation if true.

 

3: No shared sound via host?  I know both KVM and XEN provide this, so again a strange limitation if true.

 

More to follow, but these 3 are the potential deal-breakers for me.

 

Thanks

Posted

So I've been reading for a few days now and have a few questions for which most answers seem significantly dated, or which I could not find at all.

 

1: Any limitation on processor or socket count?

Summary of changes from 6.0-beta6 to 6.0-beta7

...

    CONFIG_NR_CPUS: changed from 16 to 256

...

 

For reference, tyan s4985 quad socket opteron in particular.

That board is based on the NVidia nForce4 chipsets, and I can't possibly recommend using anything less than nForce5.  nForce4 bridge chipsets were known for silent data corruption and other serious issues, finally fixed in nForce5 and later.  My own tests, confirmed by others too, showed corruption on the order of 1 bit corrupted per 4GB of data transferred through its busses.  Only one nForce4 motherboard I know of did not have these issues, the ASUS A8N (I believe), probably because ASUS worked very hard on its BIOS to work around the problems.  It may be possible that Tyan too has reworked the BIOS to avoid issues.

 

2: Hardware RAID cards (lsi8308 as an example) cannot be used?  This seems a very strange and severe limitation if true.

unRAID is a stripped down OS that boots and lives in RAM.  Only those drivers needed by users are included.  Tom has always been happy to add others on request though.  Yours is the very first request I have seen for the lsi8308, probably because there is almost no demand at all for hardware RAID cards when you have the unRAID advantages.  Yes, they can be higher performance, if you are willing to accept the limitations.
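
If you do end up testing the card, one quick sanity check from the console is whether the kernel has bound a driver to it at all; lspci -k is standard Linux tooling rather than anything unRAID-specific (the "lsi" search string below is just an example):

    lspci -k | grep -i -A 3 lsi    # the "Kernel driver in use:" line shows whether a driver attached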

 

Can't help with your third question.  As far as I know though, most users seem happy with sound in their VM's, although some have had to troubleshoot it initially.

Posted

@RobJ

1:  I've used this MB for years as my hobbyist platform for all sorts of crazy things and have never had an issue (that I know of)

Moreover, most OSes, including some Linux distros, arbitrarily cap the socket count at 2, with some sort of enterprise-version up-charge for quad or more, hence the need for clarification on cores vs. sockets when the term "CPU" is used.

 

2:  This one I just don't understand, I guess. There are plenty of mini flash-loaded OSes that include the simple ability to add a driver. Size is no excuse. In my research I came across all these commonly requested features...

JBOD cache pool

RAID 0 cache pool

Additional parity protection

JBOD parity drive

RAID 0 parity drive

Various JBOD or RAID level member drives

Etc etc

Every single one of these desires is simply solved by hardware RAIDing before UnRaid drive presentation, yet this seems not possible. The bandwidth bottleneck is the common excuse, but with local VMs no such bottleneck exists. If the developer does not see the beauty and synergy of UnRaid on top of hardware RAID and treats each individual HBA as a feature request, I just don't know what to say.

 

3:  I was referring to shared sound between multiple VMs with one sound card only. Both XEN and KVM can do this, in addition to the ability to pass through specific cards or GPU-based HDMI if desired.

 

While initially this appeared to be an excellent piece of software, I think the purposeful handicaps to performance, and to the common compatibility found in every other OS of this nature, mean this is not for me, unfortunately.

 

Thanks for your time; I sincerely hope the software matures enough to revisit at a later time.

Posted

So I've been reading for a few days now and have a few questions for which most answers seem significantly dated, or which I could not find at all.

 

1: Any limitation on processor or socket count?  For reference, tyan s4985 quad socket opteron in particular.

 

2: Hardware RAID cards (lsi8308elp as an example) cannot be used?  This seems a very strange and severe limitation if true.

 

3: No shared sound via host?  I know both KVM and XEN provide this, so again a strange limitation if true.

 

More to follow, but these 3 are the potential deal-breakers for me.

 

Thanks

 

1) The highest number is 256 logical CPUs;

 

2) This is a shitty card, and the very reason I came to unRAID (borked RAID 5 array). Since these are RAID cards, they don't expose single hard drives unless you create a JBOD unit on each drive, and this is not what you want in unRAID. You want SAS/SATA HBAs in unRAID;

 

3) If this means use of ALSA, then no, this is not supported by unRAID;
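
For anyone searching later: under plain KVM/QEMU, "shared sound via the host" usually just means the emulated sound device plays out through the host's ALSA backend, roughly like this on a manual QEMU command line (a sketch of stock QEMU options from that era; unRAID's VM manager does not expose this, as far as I know):

    QEMU_AUDIO_DRV=alsa qemu-system-x86_64 ... -soundhw hda    # emulated HDA device, mixed into the host's ALSA output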

Posted

1:  I've used this MB for years as my hobbyist platform for all sorts of crazy things and have never had an issue (that I know of)

Great!  You're one of the lucky ones!  If I had that board, I would turn on copy verification of EVERYTHING copied to or from drives attached to that board.
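
By "copy verification" I just mean checksumming both ends of every copy, for example (the paths here are only placeholders):

    md5sum /mnt/source/somefile /mnt/dest/somefile    # the two sums must match if the copy is intact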

 

Moreover, most OSes, including some Linux distros, arbitrarily cap the socket count at 2, with some sort of enterprise-version up-charge for quad or more, hence the need for clarification on cores vs. sockets when the term "CPU" is used.

As far as I know, there are no such limits in unRAID.
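
Once booted, you can confirm how many logical CPUs the kernel actually sees with standard Linux commands (nothing unRAID-specific):

    nproc                               # number of logical CPUs available to the kernel
    grep -c ^processor /proc/cpuinfo    # same count, read straight from /proc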

 

2:  This one I just don't understand, I guess. There are plenty of mini flash-loaded OSes that include the simple ability to add a driver. Size is no excuse. In my research I came across all these commonly requested features...

JBOD cache pool

RAID 0 cache pool

Additional parity protection

JBOD parity drive

RAID 0 parity drive

Various JBOD or RAID level member drives

Etc etc

Every single one of these desires is simply solved by hardware RAIDing before UnRaid drive presentation, yet this seems not possible. The bandwidth bottleneck is the common excuse, but with local VMs no such bottleneck exists. If the developer does not see the beauty and synergy of UnRaid on top of hardware RAID and treats each individual HBA as a feature request, I just don't know what to say.

We do have users with hardware cards set to JBOD or used in RAID 0 or RAID 1, so long as the card provides a standard interface transparently, or a driver was requested.  It's possible that the lsi8308 will work too, if set to JBOD; don't know without testing, though.  I *think* we have users doing all or almost all of the choices above, but you have to remember that unRAID is primarily a RAID provider itself, a modified RAID 4.  Virtualization is a very new addition, and development is ongoing.  Dual parity is expected in the next release, now in testing.
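
If anyone does test a RAID card in JBOD mode, the thing to confirm is that Linux ends up with one block device per physical disk rather than one big volume; a plain lsblk listing is enough to check (standard Linux, nothing unRAID-specific):

    lsblk -o NAME,SIZE,MODEL    # in true JBOD each physical disk should appear as its own sdX entry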

 

While initially this appeared to be an excellent piece of software, I think the purposeful handicaps to performance, and to the common compatibility found in every other OS of this nature, mean this is not for me, unfortunately.

 

Thanks for your time; I sincerely hope the software matures enough to revisit at a later time.

That's fine, it's not for everyone.  Its original mission was as a NAS for home users, especially Windows users, primarily for backups and media storage.  It's now expanding beyond that, but is certainly not yet the "be all end all" of systems.  However, there has never been a purposeful handicap to performance, just a different target, now being slowly expanded.

Posted

I'm not married to the 8308... it's just the card that's currently attached and benchmarked with the 15k SAS drives I would like to throw at the project.

Aside from the laundry list of desired features and uses, the initial goal would be simply presenting one or two RAID 10 or 0 sets to the member side of the array, to which I would assign VMs. As I plan to run up to 4 simultaneous users, stealing room off the SSD could get pricey.

There are multiple ways to do what I want; it just seems every way is blocked somehow. What controller cards are known to work for presenting RAID sets to the array?  NOT outside the array, and not functioning as a basic HBA.

Is that where the "any Areca or 3ware" comes from?

Posted

That board is based on the NVidia nForce4 chipsets, and I can't possibly recommend using anything less than nForce5.  nForce4 bridge chipsets were known for silent data corruption and other serious issues, finally fixed in nForce5 and later.  My own tests, confirmed by others too, showed corruption on the order of 1 bit corrupted per 4GB of data transferred through its busses.  Only one nForce4 motherboard I know of did not have these issues, the ASUS A8N (I believe), probably because ASUS worked very hard on its BIOS to work around the problems.  It may be possible that Tyan too has reworked the BIOS to avoid issues.

 

Finally home and can do proper postings ;)

 

I am curious about this, as my board was relatively common in Folding applications and I've never heard a mention of it.  I run a few Tyan quad-socket boards and rarely have any issues.  I'm curious where this problem occurs.  Do you know if it's in the NFP2200 or NFP2050 chipset (it's a dual-chipset board)?  Now that I think about it, I don't think I've ever used this particular board with any on-board ports, and I take very careful consideration of where my storage cards reside, on which chipset, and using which HT links.  The way in which I use all my quad boards requires me to minimize cross-node behavior, and perhaps that's why I haven't seen this issue?

 

I know I come across as an asshole, but decades of ignoring on-board anything other than "extra" NICs is a hard habit to break and being told that I now HAVE to rely on on-board ports (and all the limitations vs. BBU hardware raid that imposes) rubs me a little chafed ;)

 

The nerd in me so much wants to love this design for the advertised "One System to Rule Them All" mantra, and I see soooo much potential here to employ many old school tricks I haven't used in decades.

 

Multiple separate RAID 0 or 10 sets of multiple 15k 146GB SAS drives, presented to the array in the same parity-protected pool as my NAS drives (minimizing total drive dedication), with the deferred writes of a recycled SSD cache used for VM OSes, is just too good to pass up if possible.  Read speed of the 15k hardware array, write speed of the SSD... sooo much potential for this very real scenario.

 

Add in JBOD SSDs for recycled SSD Cache Pools

Add in RAID 1E Parity drives for recycled Raptors

Etc. Etc.

 

I understand the thinking of the masses here, in that the original intent was an entry-level alternative to hardware RAID, but wtf?  No one sees the massive performance potential here?

Posted

I think you may have missed the unRAID part of unRAID. It is not RAID. It has realtime parity for mixed size drives. There is no striping, which means that each disk is an independent filesystem, so a failure of one disk has no impact on the data on other disks. These are all considered to be features that make unRAID different and better in some ways. Read/write performance is not the point.

Posted

I think you may have missed the unRAID part of unRAID. It is not RAID. It has realtime parity for mixed size drives. There is no striping, which means that each disk is an independent filesystem, so a failure of one disk has no impact on the data on other disks. These are all considered to be features that make unRAID different and better in some ways. Read/write performance is not the point.

 

I most certainly did NOT miss that "feature".  You, like others, obviously fail to realize the potential here. I absolutely understand the difference. The potential lies in leveraging that against industry standard solutions  to create unique and far better scenarios.

Those same features you describe are far more powerful when coupled with hardware raid.

 

The fact that there is no striping in unRAID is the beauty of tiered RAIDing.  Software over hardware has always been the best solution.

 

Posted

I think you may have missed the unRAID part of unRAID. It is not RAID. It has realtime parity for mixed size drives. There is no striping, which means that each disk is an independent filesystem, so a failure of one disk has no impact on the data on other disks. These are all considered to be features that make unRAID different and better in some ways. Read/write performance is not the point.

 

I most certainly did NOT miss that "feature".  You, like others, obviously fail to realize the potential here. I absolutely understand the difference. The potential lies in leveraging that against industry standard solutions  to create unique and far better scenarios.

Those same features you describe are far more powerful when coupled with hardware raid

There are other ways to get VMs and dockers. Why not just do RAID if that's what you want? I suppose you could make each data disk and parity that unRAID "sees" be actually a striped RAID array and speed things up, but wouldn't it be a lot simpler and perhaps not even more expensive to just use a single SSD for each of the disks that unRAID "sees"?
Posted

I think you may have missed the unRAID part of unRAID. It is not RAID. It has realtime parity for mixed size drives. There is no striping, which means that each disk is an independent filesystem, so a failure of one disk has no impact on the data on other disks. These are all considered to be features that make unRAID different and better in some ways. Read/write performance is not the point.

 

I most certainly did NOT miss that "feature".  You, like others, obviously fail to realize the potential here. I absolutely understand the difference. The potential lies in leveraging that against industry standard solutions  to create unique and far better scenarios.

Those same features you describe are far more powerful when coupled with hardware raid

There are other ways to get VMs and dockers. Why not just do RAID if that's what you want? I suppose you could make each data disk and parity that unRAID "sees" be actually a striped RAID array and speed things up, but wouldn't it be a lot simpler and perhaps not even more expensive to just use a single SSD for each of the disks that unRAID "sees"?

 

To put it simply, it's the "mover" and the SSD cache of UnRaid, coupled with the insane ability to allocate users to specific shares (which, in my wish-list case, denotes specific performance characteristics).  Plus the ability to use non-uniform disk sizing (which pre-OS hardware RAID levels would enjoy the benefit of). The global parity + user share allocation is huge, but ONLY when coupled with specific hardware RAID volumes from a performance standpoint.

 

Edit:  Don't tell me to modify my expectations when this software is being advertised as "One System to Rule Them All" and as a solution coupling a high-performance gaming computer, a NAS, AND a VM hypervisor server!  I understand old forum fogies are used to justifying their shit from a purist NAS point of view, but that is no longer the advertised case.  Expect this.  This software, above all others I have researched, has the potential to succeed, but really, this forum, from what I've seen, needs to understand where new users are coming from, solely due to the "pitched" features.

Posted

 

Edit:  Don't tell me to modify my expectations when this software is being advertised as "One System to Rule Them All" and as a solution coupling a high-performance gaming computer, a NAS, AND a VM hypervisor server!  I understand old forum fogies are used to justifying their shit from a purist NAS point of view, but that is no longer the advertised case.  Expect this.  This software, above all others I have researched, has the potential to succeed, but really, this forum, from what I've seen, needs to understand where new users are coming from, solely due to the "pitched" features.

 

If you want hardware RAID so badly, you can use Areca cards; they are compatible. Make your own array and add it as "disk1", and leave the parity drive empty.

Posted

What I think most of us unRAID-centric people are missing though is that he is looking at this from a different perspective, new to us - an array of arrays.  Take a simple case, a RAID 0 parity 'drive' with one or two Raid 0 data 'drives'.  If a drive failed in one of the RAID arrays, then that array is normally lost, but this would allow a user to repair the broken array, then bring it back online and get unRAID to refill it.  That's a very interesting scenario, one I doubt anyone here has thought of before.

 

Would it work?  I suspect not yet, but it would be interesting to see what would need to change to enable that kind of usage.  Just need someone with massive amounts of test hardware!  And Tom's support of course.  But could open up a huge market possibly...

Posted

What I think most of us unRAID-centric people are missing though is that he is looking at this from a different perspective, new to us - an array of arrays.  Take a simple case, a RAID 0 parity 'drive' with one or two Raid 0 data 'drives'.  If a drive failed in one of the RAID arrays, then that array is normally lost, but this would allow a user to repair the broken array, then bring it back online and get unRAID to refill it.  That's a very interesting scenario, one I doubt anyone here has thought of before.

 

Would it work?  I suspect not yet, but it would be interesting to see what would need to change to enable that kind of usage.  Just need someone with massive amounts of test hardware!  And Tom's support of course.  But could open up a huge market possibly...

 

Yes, and no matter how many different kinds of base arrays you have, they are all protected in the master UnRaid array by your one global parity drive (or set) AND have the deferred SSD write cache.

Member Raid 10's with zero write penalty

Member Raid 0's with parity

Span Raids across multiple controllers (one card handling the parity and cache sets, while another handles the members)

 

I've still yet to hear if an Areca-compatible card can do this, because every response throughout these forums always follows it up with a contrary statement like "don't add a parity drive" or "mount it outside the array set".  I don't understand why the decision to use UnRaid must exclude every other form of RAID when they should be complementary.  Every external review I've read (and its comments) complains universally about performance or parity protection.  This strategy would be a feature far exceeding what's available to consumers in terms of performance AND protection.  I don't understand why it's met with such resistance.

 

The concept isn't new and is relatively common when spanning multiple controller cards to mirror or stripe in software.  What IS new is UnRaid's non-reliance on uniform drive size (or in this case, on uniform independent hardware arrays).  Previously, one would usually only layer software over hardware RAID when racks of uniform drives were employed.  I did this as a hobby project a lonnnggg time ago with 4x Compaq 14-drive SCSI arrays, using a MegaRAID and a Compaq card with all 4 channels on each and 7 drives per channel, just for bragging rights.  It was very loud, but fast ;)

Posted

From my tests years back with an Areca controller, you can have multiple raid sets with multiple drives or with a pair of drives.

In my initial configuration I had a SAFE raid setup.

 

RAID 0 on the outer tracks with two drives. (parity)

RAID 1 on the inner tracks with the same two drives. (cache)

 

unRAID saw both RAID sets as individual drives.

 

In addition, when testing some of the Silicon Image SteelVine chipsets that did this in hardware, unRAID saw the RAID0/RAID1 pair as individual drives.

 

When people mention mounting high speed protected raid devices outside of the array it's for performance reasons.

I.E. so the high speed raid set does not feel the performance penalty of the parity device.

If that's not an issue a RAID0/RAID10 raid set can be mounted as one of the unRAID data drives.

Posted

From my tests years back with an Areca controller, you can have multiple raid sets with multiple drives or with a pair of drives.

In my initial configuration I had a SAFE raid setup.

 

RAID 0 on the outer tracks with two drives. (parity)

RAID 1 on the inner tracks with the same two drives. (cache)

 

unRAID saw both RAID sets as individual drives.

 

In addition, when testing some of the Silicon Image SteelVine chipsets that did this in hardware, unRAID saw the RAID0/RAID1 pair as individual drives.

 

When people mention mounting high speed protected raid devices outside of the array it's for performance reasons.

I.E. so the high speed raid set does not feel the performance penalty of the parity device.

If that's not an issue a RAID0/RAID10 raid set can be mounted as one of the unRAID data drives.

 

Thank you so much for the reply, it does help me.  By using an SSD cache, isn't mounting a member RAID set inside the UnRaid array potentially a performance enhancement, though?  Isn't the parity penalty avoided since writes are deferred by the mover?  For example, my use case...

Parity: 3x 300GB Raptors Raid 0

Member: 4x 15k SAS 146GB Raid 10 (for VM OS's)

Member: Various single Drives, 640GB and such... for Storage

Cache: SSD

 

Now the RAID 10 set in question, when mounted as part of the global UnRaid array, would have...

Read Speeds of the Hardware Raid 10

"Percieved" Write speeds of the SSD (which is faster than 2x SAS drives)

Off-hour Parity as an additional protection layer.

 

Is this correct?  I see only performance gains from running it inside the UnRaid Array, and only see a performance downside when the write speed of the Hardware Raid member set exceeds the write speed of the SSD.

 

Obviously there are downsides to be considered, such as losing the spin-down ability of the parity set, as I doubt that would work through the RAID card (and that may be a very good reason NOT to hardware-RAID the parity drive).  The other big potential disadvantage I see is the lack of the software management features of the cards used.  All RAID sets would probably be forced to BIOS-only interaction, as there would be no driver once you got to the VM level.

Posted

The architecture of unRAID is always going to preclude good write performance.  Any write to an array disk requires 4 I/O operations (reads to the data and parity disk, followed by writes to both the data and parity disks).  I agree that any of those disks could actually be a RAID array if handled at the hardware level and presented to unRAID as a single drive but there are still 4 I/O operations for each write at the unRAID level.
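
To spell out where the 4 operations come from, a single write under this kind of parity scheme is the classic read-modify-write sequence (this is the standard RAID-4-style update described above, not unRAID-specific code):

    1. read old data block   from the data disk
    2. read old parity block from the parity disk
       new parity = old parity XOR old data XOR new data
    3. write new data block   to the data disk
    4. write new parity block to the parity disk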

Posted

Unraid "cache" speed is only applicable to new files created in a parity protected user share that has cache use set to yes. Writes to an already existing file on the parity protected array will be speed restricted to the real time parity generation process. The newly written files on the cache drive will be moved to their permanent position on the parity protected array by the "mover" script, which by default is scheduled to occur once a day, but can be changed to whatever cron schedule you wish.

 

So, new files, cache speed until moved. Modified files, parity limited.

 

For static media files this works great, write once, read many. For constantly changing files such as VM virtual disk files, either specify that the user share is cache only, so the files stay on the cache, or point them to a disk (volume) mounted external to the parity protected array.
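
For reference, "cache only" is just a per-share setting. If I remember the share config format correctly (it may vary between versions), it ends up stored on the flash as something like:

    # /boot/config/shares/vms.cfg  -- hypothetical share named "vms"
    shareUseCache="only"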

Posted

The architecture of unRAID is always going to preclude good write performance.  Any write to an array disk requires 4 I/O operations (reads to the data and parity disk, followed by writes to both the data and parity disks).  I agree that any of those disks could actually be a RAID array if handled at the hardware level and presented to unRAID as a single drive but there are still 4 I/O operations for each write at the unRAID level.

 

However, as noted above, if you're only concerned with cached write speeds, those 4 I/O's are essentially irrelevant => even though they will in fact be MUCH faster than with typical arrays due to the high-performance RAID "disks" that are the actual array members.

 

Posted

Unraid "cache" speed is only applicable to new files created in a parity protected user share that has cache use set to yes. Writes to an already existing file on the parity protected array will be speed restricted to the real time parity generation process. The newly written files on the cache drive will be moved to their permanent position on the parity protected array by the "mover" script, which by default is scheduled to occur once a day, but can be changed to whatever cron schedule you wish.

 

So, new files, cache speed until moved. Modified files, parity limited.

 

For static media files this works great, write once, read many. For constantly changing files such as VM virtual disk files, either specify that the user share is cache only, so the files stay on the cache, or point them to a disk (volume) mounted external to the parity protected array.

 

Thanks.  I had falsely assumed any write operation would be cached, but that does make sense given the traditional parity limitations.

 

I'll have to carefully consider pros and cons, as 3x Raptors will write parity as fast as 2x 15ks, but as these disks simply represent what I have lying around to use as a performance testbed, I do need to think about expansion eventually, not to mention keeping at least 7 drives without spin-down (not like I'm not used to that, but still...).

 

Thanks for the good conversation, everyone.  I know we started off rough, but I'm starting to feel a lot better about this community ;)

Archived

This topic is now archived and is closed to further replies.
