Ducky Posted August 24, 2018

Hey All, I've got a query about my current system. I'm using an HP DL380 G8 as my unraid box, and because the on-board RAID controller doesn't support JBOD, I'm presenting the drives to unraid as individual RAID-0 volumes. My concern is that with R0, a drive failure usually means all the data on that disk is lost. Would that be the case with unraid? In theory, if the controller sees a disk failure, then 'poof', it's all gone at the first stage, even before unraid can do anything about it. Am I correct, or can unraid recover from a failure even when using an R0 setup?

If it can't, and I am actually risking my data with the R0 setup, then I'm thinking of replacing the on-board RAID card with a reflashed LSI 9211-8i (still to get). Would I need to basically build a second unraid server to 'save' all my data/settings to before migrating to the new box? I don't think I could do an in-place upgrade of the card to JBOD, because of the current R0 setup. Hope that makes sense? thx
JonathanM Posted August 24, 2018

Do you have one of those R0 volumes assigned as parity in unraid? If so, then you can recover from 1 failed R0 volume in the array.
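[In case it helps to see why one parity volume covers one failed volume: unraid's single parity is a plain bitwise XOR across the data drives, so any one lost drive can be recomputed from parity plus the survivors. A toy Python sketch of the idea, using in-memory byte strings rather than real disk I/O:]

```python
# Toy demonstration of XOR parity, the scheme unraid's single parity
# drive uses: parity is the bitwise XOR of every data drive, so any ONE
# missing drive can be rebuilt from parity plus the surviving drives.
disks = [bytes([1, 2, 3]), bytes([4, 5, 6]), bytes([7, 8, 9])]

# Parity = bytewise XOR across all data disks.
parity = bytes(a ^ b ^ c for a, b, c in zip(*disks))

# Simulate losing disk 1, then rebuild it by XOR-ing parity with the
# survivors. The same trick works regardless of WHICH single disk died.
rebuilt = bytes(p ^ a ^ c for p, a, c in zip(parity, disks[0], disks[2]))

assert rebuilt == disks[1]  # the lost disk comes back bit-for-bit
print("rebuilt:", list(rebuilt))
```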
Ducky Posted August 24, 2018 Author

1 hour ago, jonathanm said:
Do you have one of those R0 volumes assigned as parity in unraid? If so, then you can recover from 1 failed R0 volume in the array.

I have two R0 volumes assigned as parity drives. So potentially I'm safe, and there's no need to bother moving to a JBOD setup (unless I want the SMART reporting etc.)?
JonathanM Posted August 24, 2018

Safe, in some sense of the word. There are a couple of caveats, things you need to be aware of if you have failures.

If you have a controller failure, or need to move the drives to another system, you may have issues getting the existing drives to be properly recognized with a plain HBA. Be very careful to keep track of which drive serial number is assigned to which slot so you can recreate the assignments if necessary. Typically unraid keeps track of that automatically, but with your RAID controller interfering, you need to do it yourself.

Without SMART reporting, you are relying on the RAID controller to alert you to potential failures, so unraid will be unable to email you if a drive starts degrading; it will only react once the drive is bad enough to fail a write. Not being able to monitor SMART puts you in a position of not knowing if you have multiple drives getting ready to fail. Maybe you can get SMART status from your RAID BIOS? If so, I'd make it a habit to browse the status at least every few weeks.
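[Worth noting: on HP Smart Array controllers specifically, smartmontools can often reach the physical drives behind the controller via its cciss passthrough (smartctl -d cciss,N), so you may not be completely blind while still on the onboard RAID. A rough Python sketch that polls each bay; the device path and bay count are assumptions for this hardware, and whether it works at all depends on your controller and driver:]

```python
# Hedged sketch: poll SMART identity/health for drives hidden behind an
# HP Smart Array controller, using smartmontools' cciss passthrough.
# CONTROLLER_DEV and NUM_BAYS are assumptions -- adjust for your box.
import subprocess

CONTROLLER_DEV = "/dev/sda"  # a block device the Smart Array exposes (assumption)
NUM_BAYS = 16                # number of cciss slots to probe (assumption)

for bay in range(NUM_BAYS):
    result = subprocess.run(
        ["smartctl", "-i", "-H", "-d", f"cciss,{bay}", CONTROLLER_DEV],
        capture_output=True, text=True,
    )
    # SAS drives report "SMART Health Status", ATA drives report
    # "SMART overall-health"; grab serials too so you can map bays to slots.
    for line in result.stdout.splitlines():
        if line.startswith(("Serial Number", "Serial number",
                            "SMART overall-health", "SMART Health Status")):
            print(f"bay {bay}: {line.strip()}")
```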
Ducky Posted August 26, 2018 Author

So realistically, whilst it works... it's not ideal, and the JBOD option would be best. Do you know if Unraid is still unable to spin down SAS drives? I can get SMART info, but only if I drop into the HP diag menus, which isn't very practical... Wonder if I should just bite the bullet and move everything to an LSI 9211. I do have a second server I could set up as a trial unraid box I guess, then copy everything across, rebuild the original with a 9211, and migrate it all back... Thanks for your input btw.
JonathanM Posted August 26, 2018

There is a way to migrate without setting up another system, albeit not without its own risks. If you have full backups of anything you care about, then the risks are not bad.

Install the controller in your current system, but don't connect all your drives to it initially. Move one drive at a time from the onboard RAID to the IT-mode HBA and let unraid rebuild it. I'd start with the parity drive and work my way down.

Before you actually start a rebuild, I'd do some testing. After you physically move the parity drive, do a new config keeping all current assignments, then assign the newly named parity drive and select 'parity is already valid'. Start the array and see if you get any parity sync errors. If not, then you probably won't have to do ANY rebuilding, just a new config with all the drives' new identifiers.

See, it's possible (likely, actually) that the RAID0 profile passes the entire drive through untouched, with no data address translation. If that's the case, then all you need to do is tell unraid which drive belongs in which slot and let it check parity. Worst case is that the RAID controller only exposes a portion of the raw drive as the RAID0 volume, typically to ensure that replacement drives with slightly different sizes can still be used in the RAID array. If that's the case, unraid would probably not be able to mount and use the drive without rebuilding it.

As far as I know, spin down is still not available for SAS. However, many would argue that spin down is only useful for noise and power; drive longevity is actually better if you don't spin down. SAS drives are designed to live their entire life spinning.
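[One low-risk way to test the passthrough theory before touching the array: compare the capacity the RAID-0 volume exposes to Linux against the drive's native capacity from its label or datasheet. If the controller reserves space for metadata, the exposed volume comes up short. A minimal sketch; the device name and reference size below are placeholders:]

```python
# Minimal check: does the RAID-0 volume expose the drive's full raw
# capacity? A shortfall suggests the controller keeps metadata on the
# disk, in which case unraid would likely need to rebuild after the
# move. DEVICE and EXPECTED_BYTES are placeholders.
from pathlib import Path

DEVICE = "sdb"                      # the RAID-0 volume as seen by Linux (assumption)
EXPECTED_BYTES = 4_000_787_030_016  # e.g. a typical 4 TB drive; check your label

# /sys/block/<dev>/size reports the device length in 512-byte sectors.
sectors = int(Path(f"/sys/block/{DEVICE}/size").read_text())
exposed = sectors * 512

print(f"{DEVICE}: {exposed:,} bytes exposed, {EXPECTED_BYTES:,} expected")
print("full passthrough looks likely" if exposed >= EXPECTED_BYTES
      else f"controller is hiding {EXPECTED_BYTES - exposed:,} bytes")
```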
Ducky Posted August 26, 2018 Author

The current controller connects to the SAS backplane with two cables, each controlling 16 drives, so I guess the above isn't going to be possible, sadly... Agreed on not spinning down drives to improve their lifespan; it was just for power reasons... but currently I have half the drives ejected as they're not in use, and I could do the same approach on JBOD. It was just a query really. cheers :)
JonathanM Posted August 26, 2018

47 minutes ago, Ducky said:
The current controller connects to the SAS backplane with two cables, each controlling 16 drives, so I guess the above isn't going to be possible, sadly...

Maybe, maybe not. With the correct mix of cables you should be able to connect drives individually. All you really need to accomplish is to find out whether the RAID controller passes the whole drive through intact as the RAID0 volume or not. If it does, then you just need to tell unraid the new drive IDs and you won't need to rebuild anything.
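[If you do get one drive cabled to the HBA, a quick complementary check is whether the partition and filesystem unraid created are still visible on the bare drive. A sketch using lsblk's JSON output; the device path is a placeholder:]

```python
# After moving one drive from the RAID controller to the HBA, check
# whether Linux still sees the partition/filesystem unraid created.
# If it does, the RAID-0 wrapper was almost certainly a straight
# passthrough. DEVICE is a placeholder.
import json
import subprocess

DEVICE = "/dev/sdc"  # the drive as seen on the HBA (assumption)

out = subprocess.run(
    ["lsblk", "-J", "-o", "NAME,SIZE,TYPE,FSTYPE,SERIAL", DEVICE],
    capture_output=True, text=True, check=True,
)
for disk in json.loads(out.stdout)["blockdevices"]:
    print(f"disk {disk['name']}  serial={disk.get('serial')}  size={disk['size']}")
    for part in disk.get("children", []):
        # unraid array drives normally carry a single xfs/btrfs/reiserfs partition.
        print(f"  part {part['name']}  fstype={part.get('fstype')}  size={part['size']}")
```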
JonathanM Posted August 26, 2018

55 minutes ago, Ducky said:
currently I have half the drives ejected as they're not in use

Can you connect part of the backplane to the HBA and the other part to the onboard RAID?
Ducky Posted August 27, 2018 Author

20 hours ago, jonathanm said:
Can you connect part of the backplane to the HBA and the other part to the onboard RAID?

That should be feasible. The only issue I have is that my cache drives are 23+24 (14 drives on each connector, not 16), but I could get rid of them while I make the switch. Are you thinking I move the drives to the other side of the backplane, connected to the HBA, and basically do the same as you mentioned earlier?

I need to buy the HBA card before I proceed. I'm a bit wary about the cheap ones from China on eBay - are they dodgy? I've seen a UK seller with a genuine one for twice the price; I'm not too fussed about paying that for peace of mind.
JonathanM Posted August 27, 2018

The cache drives aren't going to matter as much; they will either mount or they won't (hopefully they will), and either way it's not going to upset unraid like multiple missing array drives would. If you can keep all the parity protected array drives on the RAID controller save one, then you can try moving the guinea pig without too much risk.

I would avoid new Chinese LSI controllers and prefer working server pulls. If they were once purchased with a server, you can pretty much assume they are genuine. You can almost assume one shipped from China is grey market at best, counterfeit at worst.
JonathanM Posted August 27, 2018

Oh, also keep in mind that LSI controller cards are meant to have constant air circulation, so watch out if you have modified the airflow with quieter fans or the case has obstructions. They put off a fair bit of heat (10-15W IIRC) and won't survive in a stagnant air pocket.
Ducky Posted August 27, 2018 Author

Yeah, I've heard horror stories about the Chinese stuff! Will have a nosey for any 'server pulled' ones. No, it's a stock server (in the garage), so noise wasn't an issue! lol
Ducky Posted August 30, 2018 Author

I picked up a Dell LSI 9207-8i, which I temporarily tested in another box with Unraid - works fine. So the next stage is to try one of the drives and see if the RAID0 volume is passed through OK or not... Will try as soon as I can; I'm away this weekend.
tbonedude420 Posted September 11, 2018 (edited)

Assuming this issue is similar to the Dell 400 cards (no JBOD, but you can make single RAID0s), you shouldn't have too much trouble. IIRC, the H400 didn't actually make a RAID partition, just a virtual RAID0 partition (which isn't written to the drive itself). Think it's called VHS? Not sure. *googles*

For example, I could take the drive out, put it in another system, and it just 'worked'. Granted... Windows couldn't read it due to being BFS or ZFS, not FAT32/NTFS/NFS... Even if it is passed through, you may not like the results, such as the data stripe size, cluster, etc. Also, with the H400, I would get full performance of the drive in fake JBOD mode, but under Linux I saw a noticeable difference. Once I formatted it normally, all was well again. Which now leads me to believe it does write 'something' to the drive, maybe a mis-aligned MBR? Not sure. *googles again*

Best of luck! Will follow for updates, hope you get some time this weekend to try.

Edited September 11, 2018 by tbonedude420 (lack of google)