
JBOD - UCS C Series Server


Solved by JonathanM


Hey folks, I'm getting back into Unraid and had some questions about the best setup when using enterprise-grade gear.

 

I have a Cisco server with lots of storage.

 

Right now I've created 12 single-drive RAID 0 virtual disks, really just passing them through to Unraid. One thing I noticed is that I don't see much in the way of temps and SMART drive stats, but it works great. Things I want to test are failing a drive and swapping a drive to increase its size. Any thoughts so far?

 

OR

 

Should I make them all JBOD instead? I assume that would work. Other than getting some SMART drive info into Unraid, is that worth it?

 

Now, if you know Cisco gear: regardless of which setup I use, I get access to the storage controller via CIMC, so I get all sorts of stats and the ability to manage the drives through CIMC if needed. It's super nice to turn on locator LEDs, for example.

 

Any deal breakers or recommendations for either setup?

 

Secondly, what should I do with 1.5 TB of RAM... one big RAM disk? 😀

 

 

 


I totally understand why mixing hardware and software-based RAID is a bad idea, so I started searching the forums for some horror stories, but came up lacking so far.

 

This build is a total test server, so I'm going to keep playing with it, as all signs so far say it's working great. I can see how this would confuse the average user in more ways than one: asking the average Joe to deal with virtual disks is a lot, and god help them if they actually built some type of RAID 1/5/6 and then stuck Unraid on top of it; that would be a horrible idea. But the way I have this set up, it's really just passing the drives to Unraid as single-drive RAID 0 volumes, which to me is similar to how people flash LSI cards for JBOD and let Unraid run it.

  • Solution
1 hour ago, BOBOkingbbe9 said:

started searching the forums for some horror storys, came up lacking so far.

The biggest issue tends to come when it's time to migrate to different hardware: Unraid identifies drives by their hardware identification, which RAID controllers don't pass through intact. One of the biggest benefits of Unraid is the ability to move the drives to almost any plain vanilla platform; as long as the disk controller is a plain HBA, the drives will be detected and used without any changes needed at all.
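To make the identification point concrete, here's a minimal sketch of keying drives by model plus serial (the exact format Unraid uses isn't shown here; the `disk_id` helper and both serial numbers are my own illustrative values). A plain HBA passes the physical drive's own identity through, while a RAID controller presents its virtual disk with a controller-generated identity that won't survive a move to other hardware:

```python
# Sketch, assuming drives are keyed by a "MODEL_SERIAL" string;
# the helper and the sample values below are hypothetical.
def disk_id(model: str, serial: str) -> str:
    return f"{model.replace(' ', '_')}_{serial}"

# Plain HBA: the physical drive's own model and serial come through.
hba_drive = disk_id("ST18000NM000J", "ZR5AAAAA")      # hypothetical serial

# RAID controller: the vdisk's identity is generated by the controller
# and changes if the vdisk is ever recreated or the controller swapped.
raid_vdisk = disk_id("Cisco UCSC-RAID", "600605B001")  # hypothetical values

print(hba_drive)   # ST18000NM000J_ZR5AAAAA
print(raid_vdisk)  # Cisco_UCSC-RAID_600605B001
```

Moving the same physical disks behind a different controller would change `raid_vdisk`-style identities, which is why the array no longer recognizes its members.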

 

Sometimes RAID controllers alter the drive's geometry, which can put the partition at a location that isn't compatible when the drive is moved to different hardware, causing unmountable drives.

 

USB instead of SATA or SAS is very hit or miss, typically miss. The USB bus resets can and do cause writes to fail, and Unraid will drop the drive when a write fails.

 

Another issue for many people (not for you, you seem to have this taken care of) is that Unraid's ability to monitor and report on disk health is disabled by many RAID controllers. If a drive is failing, you want to investigate, and possibly replace the drive sooner rather than later. Unraid (like most RAID) requires the entire capacity of all the data drives and the parity drive to rebuild a failed drive. The reason it's super important with Unraid compared to other RAID setups, is that in the Unraid parity array disks are allowed to spin down when not accessed, which means a drive can possibly go months without being touched, but when another drive fails the spun down drives are all required to be read perfectly.
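To see why every surviving disk has to read back perfectly, here's a minimal single-parity sketch (assuming simple XOR parity, which is how Unraid's first parity drive works): reconstructing a failed disk means XORing the parity with every remaining data disk, so an unreadable sector on any one of them breaks the rebuild at that position.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three tiny "data disks" and their XOR parity.
disks = [b"\x01\x02\x03", b"\x0f\x00\xf0", b"\xaa\x55\x00"]
parity = xor_blocks(disks)

# Disk 1 "fails"; rebuilding it requires parity plus ALL other disks.
rebuilt = xor_blocks([parity, disks[0], disks[2]])
assert rebuilt == disks[1]
```

Drop any one of the surviving inputs and the XOR no longer resolves to the missing data, which is the whole reason long-spun-down drives still need to read flawlessly during a rebuild.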

 

As long as you keep good backups you will be fine. The issue is simply that you are bypassing or hamstringing many of the features and safeguards built into Unraid to help protect your data. People tend to get upset when they lose data, so we try to mitigate as many causes as possible.


I have used a RAID 0 for parity and JBOD for the rest. It allowed me to double the drive size, so when adding HDDs later on I was not confined to 18TB (in my case) drives to add to the array. There are also speed benefits in certain cases.

The downside, of course, is that losing either RAID 0 member means losing parity.
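The arithmetic behind that trick, as a quick sketch (sizes are hypothetical; the constraint that no data disk may exceed the parity device is the Unraid rule):

```python
# Sketch: striping two drives into one RAID 0 parity device raises the
# ceiling on future data-disk sizes, since no data disk may be larger
# than parity. Sizes below are illustrative.
members_tb = [18, 18]             # two 18TB drives striped as RAID 0
parity_tb = sum(members_tb)       # controller presents one 36TB device
max_new_data_disk_tb = parity_tb  # largest data disk the array accepts

print(max_new_data_disk_tb)  # 36
```

The trade-off stated above falls straight out of RAID 0: either member failing takes the whole parity device with it.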

4 hours ago, JonathanM said:

The biggest issue tends to be when it comes time to migrate to different hardware, Unraid identifies the drives with hardware identification, which RAID controllers don't pass intact. [...]

Great call-outs, exactly what I was looking for: technical details and things to look out for. Yeah, moving these drives to another server would be a disaster; I never thought of that. My guess is that when that day comes for me, I'll be getting a newly decommissioned server and moving the data via UNC paths instead of moving drives, so I'd never deal with that issue.

 

I messed with an HBA in a home computer once using SAS drives and it was a disaster, but this time, doing it all within a Cisco server, it has been great.

3 hours ago, Veah said:

I have used a raid0 for parity and jbod for the rest. [...]

Ha, I never thought about using RAID to create a "large" parity disk to account for it needing to be the largest drive in your pool.

 

I should be clear for anyone reading this: I used RAID 0 to create 12 vdisks, and each vdisk has ONE drive in it. So if a drive fails, it will fail the vdisk, CIMC will alert me, and Unraid will drop the drive. I'm a little unsure what Unraid will do if I swap it for a drive of the same size. Since it's a vdisk, Unraid is likely to see it as the same drive, but will it be smart enough to know the data is missing and start a rebuild? Would love some help answering that question.

 

My test tonight is to fail a drive and replace it with a larger one to watch it rebuild; I'll likely create a new vdisk to account for the larger drive size. I'm also planning to fail some drives and convert them to JBOD so the controller passes them straight through to Unraid. I would think at that point I'd get all the SMART drive attributes.

 

I'm fortunate enough to have access to lots of large enterprise-grade drives, but getting them to work in consumer gear is a struggle, which is why I'm testing them in the hardware they came out of.


Honestly, at this point it would be good for you to just have a play. If you start the array with the failed disk unavailable, then the next disk assigned to that slot will be used for the rebuild. The emulated failed drive's data should be available any time the array is started.

 

A larger drive will automatically expand during the rebuild, and for reasons that are obvious once you know how parity works, it's impossible to rebuild onto a smaller drive. The replacement drive can be any size from equal to the failed drive up to and including the size of the smallest parity drive.
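That sizing rule can be written down as a quick check (a sketch; the function name and the sample sizes are mine, not Unraid's):

```python
def valid_replacement(replacement, failed, smallest_parity):
    """A replacement data drive must be at least as large as the failed
    drive and no larger than the smallest parity drive."""
    return failed <= replacement <= smallest_parity

TB = 10**12
assert valid_replacement(10 * TB, 8 * TB, 18 * TB)      # larger is fine
assert not valid_replacement(6 * TB, 8 * TB, 18 * TB)   # smaller: impossible
assert not valid_replacement(20 * TB, 8 * TB, 18 * TB)  # exceeds parity
```

The lower bound exists because parity covers every sector of the failed drive; the upper bound exists because parity must cover every sector of each data drive.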

On 8/20/2024 at 10:18 PM, BOBOkingbbe9 said:

access to the storage controller via CIMC,  so I get all sorts of stats

If Unraid can monitor SMART, it can alert you by email or other agent as soon as a drive has SMART problems. Does your method let you get alerts, or do you have to actually look at your system to find out something is going on?
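For anyone wondering what that kind of monitoring looks like under the hood, here's a rough sketch: scan `smartctl -A`-style attribute output and flag the counters that usually precede failure. The sample output and the watch list are illustrative, not Unraid's actual alerting code:

```python
# Attributes whose raw value should stay at 0 on a healthy drive.
WATCHED = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
           "Offline_Uncorrectable"}

# Illustrative excerpt in the 10-column `smartctl -A` layout.
SAMPLE = """\
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       12
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
"""

def smart_warnings(text):
    """Return (attribute, raw_value) pairs for watched attributes > 0."""
    warnings = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] in WATCHED and int(fields[9]) > 0:
            warnings.append((fields[1], int(fields[9])))
    return warnings

print(smart_warnings(SAMPLE))  # [('Reallocated_Sector_Ct', 12)]
```

The point of the question above stands either way: a check like this is only useful if its result reaches you as an alert rather than waiting to be looked at.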

  • 2 weeks later...
On 8/22/2024 at 8:19 PM, trurl said:

If Unraid can monitor SMART, it can alert you by email or other agent as soon as a drive has SMART problems. [...]

The CIMC controller will alert me, and it also supports SNMP. I've been putting this thing through the wringer and all good so far.

