Migrate from pre-existing Windows machine with RST RAID 5 data drives?


TreyH


I think I may have made a very costly mistake... I was aware that Unraid cannot address disks that use Intel RST RAID, but I thought that just meant it couldn't use them directly; now I'm reading that RST needs to be turned off entirely in the BIOS/UEFI? That really messes up my plan to use the Unassigned Devices plugin to migrate my RAID 5 data into Unraid, especially since I've already bought enough new SSDs and HDDs for a migration I thought "should" work....

 

The background: my workstation recently had a catastrophic unrecoverable Windows update error. I decided to capitalize on the need to reinstall to shift to Unraid with a Windows VM.

 

(I used that Windows install as my graphics workstation, and also as my Hyper-V hypervisor to run NAS duties, Docker containers, and an Ubuntu VM, so not having those other functions go belly-up whenever Windows crashed was another reason to move to Unraid, though Proxmox etc. might have worked for this need too.)

 

While I've been unable to recover my Windows (10 Pro) install, or to do a non-destructive repair/recovery install, I can still access the filesystems using either Windows Recovery or a live CD such as SystemRescue. I have approximately 5 TB of data, split between:

  1. Boot "disk": two Samsung EVO 1 TB SSDs in an Intel RST RAID 1 mirror (approximately 900 GB used)
  2. Data "disk": three WD Red 3 TB HDDs in an Intel RST RAID 5 array (approximately 4 TB used)

 

So, using spare drives I had around, I first unplugged all of the above, plugged in these random drives, and did a minimal Unraid install with a Windows VM, just to verify I could get Windows running with my GPU and other necessary peripherals passed through. This Unraid + Windows with virt passthrough test went fine.

 

Thinking this was going great, I then bought enough new disks to give the Unraid array room for all the data on my two RAID sets, with (I thought) plenty of headroom. Once the data was off the boot and data "drives", I could reformat them and put them to other uses (in Unraid or elsewhere). This includes an 8 TB HDD I bought for Unraid parity, on the assumption that I would never need a drive larger than 8 TB in this machine.

 

So—I intended to use this migration process:

  1. Temporarily unplug these five disks,
  2. Plug in the new blank ones only,
  3. Set up Unraid, including shares and a Windows VM,
  4. Then plug those RST RAID disks back in, and use the Unassigned Devices plugin to migrate the data on those two logical "drives" into Unraid shares. (I spent most of my career in SRE with big storage, so I was confident I could get the data over one way or another, whether with built-in or community-contributed Unraid tools, user scripts I'd write myself (something like the rsync sketch just below this list), or a guest VM doing the heavy lifting. I wasn't worried about this part.)
  5. Once the data was off and verified, remove the SSDs to scavenge for other purposes, then reformat and add at least one of the three old HDDs for additional storage headroom. (I planned to test running my Ubuntu VM, where I do most of my development work, from an Unraid share, plus the other options offered by the UD plugin, such as a passed-through drive, and compare the results before deciding what to do with the old disks.)
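
For what it's worth, the copy itself is nothing fancy in my mind; something along these lines, where the mount point and share name are placeholders rather than my real paths:

# first pass, then a checksum-only re-check once everything has been copied
rsync -avh --progress /mnt/disks/old_raid5/ /mnt/user/migrated-data/
rsync -avhc --dry-run /mnt/disks/old_raid5/ /mnt/user/migrated-data/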

 

But now, having done more research before getting started, I've read on here that Unraid shouldn't even be run with RST active in the BIOS (UEFI, I mean), and that trying to boot Unraid with RST active and those disk sets plugged in could result in data loss? I hope these warnings are perhaps dated?

 

It's possible, with some care, to break the boot RAID 1 mirror pair and just use one of the disks as an NTFS drive, so that part I'm not worried about.

 

It's the data on the larger RAID 5 set that I'm stuck on. I don't have any other machine, or a large-enough staging disk, to get its data out of the RST logical drive without playing a game of musical chairs: rebooting back and forth between Unraid (with the RAID disks unplugged and the BIOS set one way) and SystemRescue (with the RAID disks plugged in and the BIOS set the other way), moving one chunk at a time, each small enough to fit on the "extra" 1 TB drive.

 

(My Ubuntu VHDX stored on the RAID 5, which I'd hoped to use as the basis for a new Ubuntu VM, will take some special care, as it is 1.5 TB by itself, though I imagine I can build a custom rescue disk with the tools necessary to "guestmount" the VHDX as a filesystem if I need to.)
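
In case it helps anyone following along, the libguestfs route I have in mind looks roughly like this; the paths are placeholders and I haven't tested it against this particular VHDX yet:

# mount the VHDX's filesystem read-only via libguestfs, copy out what's needed, then unmount
guestmount -a /mnt/staging/ubuntu-dev.vhdx -i --ro /mnt/vhdx
guestunmount /mnt/vhdx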

 

Is there any sort of workaround to this painstaking and dreadful-sounding process at all, short of buying yet another drive large enough to hold all this data at once, so I can use a live CD OS to copy the data onto it before turning RST off? (I'd probably have to buy another PCI SATA card at that point, too...)

 

Could I safely run without parity long enough to use the 8TB drive as my staging area—since it could, in fact, hold everything on both logical drives with room to spare? It is by far the largest disk in the whole kit and caboodle, so it’s eventually destined to be the parity drive.

5 minutes ago, TreyH said:

Could I safely run without parity long enough to use the 8TB drive as my staging area

Parity is for emulating and rebuilding a missing disk. It's not required to have a parity disk.

 

However... it sounds like you don't have a backup strategy in place. Parity isn't backup, so even after you get migrated, you will still need a backup of anything you don't want to lose.

 

Given the complexity of your data migration, I highly recommend getting a backup in place before you risk erasing everything.

50 minutes ago, jonathanm said:

Parity is for emulating and rebuilding a missing disk. It's not required to have a parity disk.

 

However... it sounds like you don't have a backup strategy in place. Parity isn't backup, so even after you get migrated, you will still need a backup of anything you don't want to lose.

 

Given the complexity of your data migration, I highly recommend getting a backup in place before you risk erasing everything.

I have a network-based backup (though, as I discovered, I can't rebuild the boot OS from it; the data files are all there and look fine from spot checks), but rebuilding a new installation from scratch off of it would take many days, and be costly, as I'm on a storage plan where I pay for bulk transfer. I would really rather use the media I already have installed here.

 

That said, someone on the official Discord has suggested that all I need to do is switch the Unraid boot USB to UEFI rather than legacy boot, and then UD will see the RAID volumes fine? This would make sense, as the same change was necessary for the SystemRescue and Windows Recovery USBs to mount them. I will give it a try tomorrow and report back if that's all it is, though if someone reads this before then and knows this to be dangerous, I'd appreciate the warning.
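
For reference, as I understand it, switching an existing flash drive to UEFI boot is just a matter of renaming the trailing-dash EFI folder on it; I haven't verified this on my stick yet:

# with the Unraid flash mounted at /boot on a running box
mv /boot/EFI- /boot/EFI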

  • 2 weeks later...

So I tried this, first with a UEFI Arch Linux boot USB to verify I could still mount and access the data on the RAID 5 set (I could), and then booting Unraid 6.9.0-rc2 (created with the Unraid USB Creator with UEFI checked) on the same USB port. The Intel RST status on the BIOS screen showed everything green, but when I got into the UD settings, I got this (the green highlighting marks the three drives in the RAID 5 set):

[Screenshot: Unassigned Devices device list, with the three RAID 5 member drives highlighted in green]

 

They showed up as separate drives, which in itself isn't necessarily worrying, since Arch Linux also sees four devices: each of the three raw 3 TB disks, plus the 6 TB logical disk. But UD only lists the raw disks, not the RAID set, and only one of the three even had a "Mount" button, which didn't work anyway. The log messages were:

Feb 14 10:20:03 NAS unassigned.devices: Adding disk '/dev/sde1'...
Feb 14 10:20:03 NAS unassigned.devices: Mount drive command: /sbin/mount -t isw_raid_member -o rw,auto,async,noatime,nodiratime '/dev/sde1' '/mnt/disks/WDC_WD30EFRX-68E************RTHYY'
Feb 14 10:20:03 NAS unassigned.devices: Mount of '/dev/sde1' failed. Error message: mount: /mnt/disks/WDC_WD30EFRX-68E************RTHYY: unknown filesystem type 'isw_raid_member'. 
Feb 14 10:20:03 NAS unassigned.devices: Partition 'WDC_WD30EFRX-68E************RTHYY' cannot be mounted.
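
From what I can tell, 'isw_raid_member' isn't a mountable filesystem at all; it's just the signature blkid reports for an Intel RST member disk, so this mount was never going to succeed without the set being assembled first. The signature can be checked directly (device names are from my box and may differ on yours):

# should report TYPE="isw_raid_member" rather than a real filesystem
blkid /dev/sde /dev/sde1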

 

I went into the Linux console and checked: unlike on Arch Linux, where the assembled set shows up under /dev/md*, no /dev/md* devices exist on Unraid. Do I need to pass some kernel boot options or something?
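
For reference, on a stock distro the set would normally be brought up with mdadm's IMSM support, roughly like the sketch below. I have not tried this on Unraid, which uses the md driver for its own array, so treat it as a sketch rather than a recommendation; the device names are guesses, and I'd want to confirm how to keep the assembly itself read-only before running it against real data:

mdadm --examine /dev/sde                # show the Intel (IMSM) RAID metadata on one member
mdadm --assemble --scan                 # assemble the container plus the RAID 5 volume inside it
mount -o ro /dev/md126p1 /mnt/staging   # the assembled volume typically appears as md126 or similar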

 

 

16 hours ago, jonathanm said:

You will need to keep Unraid from seeing and trying to access the disks you want to mount. Set up a VM and pass through those disks so you can deal with them there.

 

Thanks, that makes sense. But following the guide for 6.9, I went to Tools → System Devices, and all my disks (the array disks and these Intel RST disks) are in the same IOMMU group, which makes sense, as I don't have a separate storage card, just the motherboard's built-in controller.

 

So how does one go about hiding devices rather than the entire IOMMU group?
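
In the meantime, the fallback I'm considering is to skip IOMMU passthrough entirely and attach the raw disks to a VM as block devices, something like this with virsh (the VM name and by-id path are placeholders, and I haven't tried it yet):

virsh attach-disk Rescue /dev/disk/by-id/ata-WDC_WD30EFRX-68EXXXXXX vdb --config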

3 hours ago, jonathanm said:

On the UD line item for the disks in question, click on the settings (three gears) and select the passthrough option.

Thanks, but if you look at the screen grab above, only one of the three 3 TB drives has the gear icon—what do I do about the other two?
