Re: Format XFS on replacement drive / Convert from RFS to XFS (discussion only)



Gave it a real quick try before bed. Found two drives. Was going to move all the data off the one drive into a folder named "mike" using Midnight Commander. Within 2 minutes it would fail and tell me the file cannot be created. Something about a read-only file system again. I used to move loads of files via MC all the time and never came across this, so this may be something new. Maybe when they are created they automatically turn read-only. And of course I'm signed in as root in the terminal. Just have to figure out why it is doing that, because I can copy the data off a drive pretty quickly and get this XFS going pretty quick. Then I couldn't even delete the files that were copied. The only way I could do that was to reboot the system, which made them NOT read-only. Must be a flag somewhere telling unRAID to make newly created files within the shell read-only, or something related to that. I'm seeing doubles now. Maybe not such a good idea to keep tinkering.

 

You should probably check the syslog.  If file system corruption is detected, the file system is changed to read only.  You may want to check the file system on the destination drive you were copying to.
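A quick way to look, as a minimal sketch (the search pattern is just an example), is to grep the syslog for the remount events:

grep -iE 'reiserfs|remount|read-only' /var/log/syslog | tail -20    # show the most recent file system errors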

Link to comment


This is disk 4. Bitmap block corrupted? This drive went through hell with HD Tune on my PC. Multiple LONG sector-by-sector erases/writes/erases/clears, then over again. I'm 100% positive this drive is 100% solid.

Next test would be to randomly pick another drive and see if the same thing happens. If it happens for all the drives, then there could be a bug with my system when trying to copy with rsync.

 

May  8 00:05:01 SUN avahi-daemon[1953]: Service "SUN" (/services/smb.service) successfully established.

May  8 00:37:01 SUN kernel: REISERFS error (device md4): reiserfs-2025 reiserfs_cache_bitmap_metadata: bitmap block 308248576 is corrupted: first bit must be 1

May  8 00:37:01 SUN kernel: REISERFS (device md4): Remounting filesystem read-only

May  8 00:37:01 SUN kernel: REISERFS warning (device md4): clm-6006 reiserfs_dirty_inode: writing inode 4666 on readonly FS

May  8 00:50:42 SUN kernel: mdcmd (59): nocheck

May  8 00:50:42 SUN kernel: md: nocheck_array: check not active

 

Only thing I can find, but I think it is wrong. It also gets stuck in a loop trying to unmount. After booting back up normally, those READ ONLY files turn back to normal and I'm able to delete them. So when rsync fails, serious bugs pop up. Almost makes me want to just do everything with Windows clients instead. I used to move big blocks of data with MC all the time on 5.0.6; v6 seems very unstable when moving files within disk shares.

 

I...Stop SMB...Spinning up all drives...Sync filesystems...Unmounting disks...Retry unmounting disk share(s)...Unmounting disks...Retry unmounting disk share(s)...Unmounting disks...Retry unmounting disk share(s)... (and so on, repeating indefinitely)

Link to comment


Yep, that's a corrupted Reiser file system.  Whenever you start up, it's going to mount it normally, until you do the particular file I/O that touches the file system corruption, then it's going to remount it read-only.  Please run Check Disk File systems on Disk 4.
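If you prefer the command line, roughly the equivalent check for a ReiserFS disk, with the array started in Maintenance mode (md4 here matches the syslog above), is:

reiserfsck --check /dev/md4         # read-only pass; reports any corruption found
reiserfsck --fix-fixable /dev/md4   # apply the safe fixes, if the check recommends it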

Link to comment

Also note that file system corruption is not necessarily related to hard drive failure.
Link to comment


Got it. Like a Windows checkdisk. I only copied less than 20 MB and it failed every time. I guess there is no harm in doing at least just a --check on all the disks... because I want to make sure my source filesystem is good too. I never had a problem copying even 1 TB with MC before. I even opened the box up, checked all connections, tested my PSU, and ran a long SMART test on the same drive. All good. Now I'm afraid to even try to copy again.

 

Next time I'll try a block of files first; maybe I can narrow down where it fails. Very hard to figure out, since it doesn't output anything in the syslog except AFTER the filesystem is flagged bad and then set to read-only.
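One low-effort way to catch the moment it happens, as a sketch: leave a second session following the syslog while the copy runs, and note which file MC was on when the REISERFS error scrolls by:

tail -f /var/log/syslog    # watch live in a second terminal during the copy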

 

EDIT:

Found 1 corruption. Fixing now. I hope it will tell me what was corrupt.

 

EDIT 2:

Fixed. :)

 

 

Link to comment

10 - Now is a good time to move the files in the "t" directory to the root on [dest]. I do this with cut and paste from Windows Explorer.

 

11 - Stop the array (no need to delete anything from the [source])

 

12 - Go back to step 2.  Note that this isn't a race - you can do it at your leisure over the course of days, weeks, or months. I do one or two a week or so.

 

Greetings,

 

I've been reading and re-reading this thread over the last couple of weeks.  I have a few questions:

 

1) Before I can add a new, larger drive, I need to move data around so that I can remove a drive.  I figure it's about 600 GB of data.  Would it be best to shut down the array to move that data?  I assume in that case I'm safe to just telnet to the unRAID server and use the following rsync command to move the data into the existing hierarchy, without having to create a temporary subdirectory that isn't part of the user share, correct?

 

rsync -av --progress --remove-source-files /mnt/diskX/ /mnt/diskY/
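Worth noting: --remove-source-files deletes only the files, leaving the empty directory tree behind on the source, so a follow-up pass along these lines (the path is an example) cleans it up:

find /mnt/diskX -mindepth 1 -type d -empty -delete    # prune leftover empty directories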

 

At that point, I should be able to start following the guide, I think. 

 

2) Where does the pre-clear of the new drive get done?  Is it before the steps in bjp999's guide?  It is mentioned, but it is above the steps, so I wasn't sure.

 

3) Also - in the steps above, as the array is running, are you at risk of duplicate files and confusing things between steps 10 and 11?

 

Thanks,

 

John

 

Link to comment

I have been slowly switching over to XFS. It's just been a slow and time-consuming process, but that's how it is... and I'm also an impatient person. I read pros and cons about all the file systems and had no real reason to switch over to XFS, but I wanted to give it a try after hearing it may be a little more efficient than ReiserFS. I hope! :)

 

Anyway, I'm using MC to MOVE files to a temp folder and, once done, reformat the empty drive to XFS, then MOVE back. Do you think a copy would perform better, then deleting manually? I'm asking now since I'm about halfway done with a 12-hour copy and I don't want to stop it. I did forget to use screen this time, but if it disconnects I can just restart the move.
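For anyone who wants to avoid that risk, a minimal screen recipe (the session name is arbitrary) keeps the move alive if the connection drops:

screen -S move    # start a named session, then launch mc inside it
screen -r move    # reattach to it later, e.g. after a disconnect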

Link to comment


You've got two different issues: replacing a disk with a bigger disk, and converting your file systems.

 

The thing to keep in mind is that if you rebuild from parity, your array will rebuild that disk with the current filesystem; you can't switch file systems and rebuild at the same time. So what you will need to do is either convert the disk to XFS first and then swap for the bigger disk, or swap for the bigger disk and then convert to XFS.

 

If you've got space issues I think the second path makes more sense...

 

Answers to questions:

 

1) If you've got space on the destination disk for the files on the source disk, that should work just fine.

2) This depends on how you want to / are able to do this process. You might be better off pre-clearing your disk, replacing the smaller disk, having it rebuild, and then starting the conversion process to XFS. In that case you'll want to pre-clear before moving any data.

3) You should be fine at steps 10 / 11, since you will have already formatted the disk you moved the data off of.

Link to comment


Hmm...  Maybe I'm missing something.  I was not planning on rebuilding a drive from parity.  My thought was to empty out one drive by moving its contents to another existing drive, remove the empty drive and replace it with a larger drive, pre-clear the new larger drive, format it with XFS, and then move data to it from another existing drive.  Once that is done, I would continue a similar process until the data has been migrated to drives formatted with XFS.  In particular, here is the layout:

 

Current state:                               Future state:

Parity: 2 TB     (keep drive)                Parity: 2 TB
Disk1:  1 TB RFS (60% full) (replace drive)  Disk1:  2 TB XFS
Disk2:  1 TB RFS (40% full) (replace drive)  Disk2:  2 TB XFS
Disk3:  2 TB RFS (20% full) (keep drive)     Disk3:  2 TB XFS

 

My thought process is as follows:

Move all data from Disk1 to Disk3, remove current Disk1. 

Install replacement for Disk 1

Move all data from Disk2 to new Disk 1, remove Disk2.

Install replacement for Disk 2

Move all data from Disk3 to new Disk 2, re-format Disk3.

Move some data from Disks 1 and 2 to Disk 3 to even out the disks somewhat.

 

Does that make sense?

 

Thanks,

 

John

 

 

 

Link to comment

@johnO

 

I replaced my parity and added two drives, and what gundamguy describes happened to me. The two new drives are XFS and the other is RFS, which makes parity RFS.

So I'm left with what he suggested: moving data to a different disk, converting, and moving it back.

 

Just to clarify -- maybe I should not use the word "replace"; instead, I'll be adding new drives to the array and removing old drives from the array (with one removed/re-added drive being the same physical mechanism).

Link to comment

If you add a precleared data drive, parity is still valid. If you rebuild a data drive, parity is still valid. If you remove a data drive, parity will be invalid and you will have to rebuild parity.

 

If you empty a data drive, then you can format it to a different file system and parity is still valid. So, you could rebuild an empty drive then format it, or format it then rebuild it. Either way parity is still valid.

 

So, you can either rebuild a data drive or rebuild parity.

 

Link to comment

Hello all, I am currently in the process of moving my old Unraid 5 server to a new Unraid 6 build.  Key facts: I don't have a single spare drive equal to the largest drive in the old array (I simply have a couple of external 1TB drives), and I want to make use (of course) of XFS on the Unraid 6 server.

 

Is it possible for me to take my 3TB parity out of the old server and transition that to the new server, whilst still being able to access the old server and copy data to the external drives (array will be unprotected)?

Once I have cleared off a 2TB data drive from the old server and placed it into the new one, do I need to pre-clear this again, or just re-format it from the GUI as XFS?

 

Many thanks for the advice. :)

Link to comment

My thought was to empty out one drive by moving its contents to another existing drive, remove the empty drive and replace it with a larger drive, pre-clear the new larger drive, format it with XFS, and then move data to it from another existing drive... Does that make sense?

Your process does make sense, and will work, but as Trurl explained, when you remove a data disk from the array you are going to invalidate parity and be prompted to rebuild it. So you might have more downtime as you do parity rebuilds during this process. Also, since you plan to replace two disks, that means you'll be prompted to rebuild parity twice. Not sure if there is a better way to sequence this, but you can view it as two separate tasks: 1) converting from RFS to XFS, and 2) replacing two hard drives.

 

If I were trying to do what you want to do, I would first replace the drives and then convert from RFS to XFS, but you can also sequence this so that you replace drives and convert in alternating steps.

 

Is it possible for me to take my 3TB parity out of the old server and transition that to the new server...? Once I have cleared off a 2TB data drive from the old server and placed it into the new one, do I need to pre-clear this again, or just re-format it from the GUI as XFS?

 

You can run without a parity disk for a while and still have access to your data on the array data drives, if you want to.

 

I'm not sure that pre-clearing has much value for you here since the disk has already been stress tested before. I'm not an expert on pre-clearing, but I would think that you should be fine just re-formatting it. However, again, I am not the best on pre-clearing, so there might be some benefit I am missing here.

 

 

Link to comment


You say you are doing a new build. Will you be keeping the old build in service when the new one is done?

 

Personally, I would go ahead and invest in a couple of new 3 or 4 TB drives for the new build, but then I am made of money!? :o

 

You can do what you say but of course you would be taking some risks. Do you have backups?

 

A disk is only required to be clear if you are adding it to a new slot in a parity array; i.e., actually increasing the drive count. This is so parity will remain valid. If you don't have parity or haven't yet built parity, then the data drive doesn't need to be clear.

 

The other reason to preclear is to test the drive, but if it is already working well and has good SMART reports, and it isn't required to be clear as above, then it's OK to just format.

Link to comment

Thanks for the replies.

 

You say you are doing a new build. Will you be keeping the old build in service when the new one is done?

No, I will almost certainly get shot of it; too big and clunky.

 

Personally, I would go ahead and invest in a couple of new 3 or 4 TB drives for the new build, but then I am made of money!? :o

You can do what you say but of course you would be taking some risks. Do you have backups?

I have considered this, but without sounding tight, I refuse to pay more for a WD Red drive than I did last year, which is the current price scenario here in Germany  >:(

Backups would only be made onto external drives.  Once the migration is complete, then I would have some smaller drives lying around which I shall make use of for the more important stuff.

 

A disk is only required to be clear if you are adding it to a new slot in a parity array; i.e., actually increasing the drive count. This is so parity will remain valid. If you don't have parity or haven't yet built parity, then the data drive doesn't need to be clear.

 

The other reason to preclear is to test the drive, but if it is already working well and has good SMART reports, and it isn't required to be clear as above, then it's OK to just format.

Thanks, sounds sensible.  Just to be clear, popping an old Reiser-formatted drive into the new build won't kick off a parity build straight away, will it?  I should be able to select the drive from the UI and then tell it that I want to format it as XFS?

Link to comment

...  Just to be clear, popping an old Reiser-formatted drive into the new build won't kick off a parity build straight away, will it?  I should be able to select the drive from the UI and then tell it that I want to format it as XFS?

If parity isn't assigned, then no parity will be built. If parity is assigned and you try to add a drive that isn't clear, then unRAID will clear it for you so parity will remain valid. I think it will not actually do anything until you try to start the array, but I don't think you can format without starting, so it's better to just not assign parity until after the format.
Link to comment


I think this is finally starting to sink in...

 

 


So you are suggesting that I should just stop the array, replace both 1 TB drives with 2 TB drives, and have the unRAID parity system "magically" rebuild them onto the new 2 TB drives.  These would then be RFS drives at this point.

 

If I take this approach, I'm not sure the best way to get to XFS.  I only have 4 slots on this controller card, and all are full (three data drives, one parity drive), thus the discussion of consolidating data from three data drives down to two as my first step.

 

Sorry for the basic questions, but as these are fairly lengthy processes, I figure I'd better get it right to try to minimize down time.

 

Thanks,

 

John

Link to comment


Your quoting is a bit off, but I think this is the new content that I want to address.

 

First of all, you should preclear any new drives, even though nothing in the process I am about to outline requires cleared drives. The preclear is just for testing purposes. A disk only needs to be clear when adding it to a new slot. This is so parity will remain valid. When rebuilding, parity will remain valid anyway.
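Assuming the community preclear script (Joe L.'s preclear_disk.sh, a community add-on rather than stock unRAID) is installed, a single test cycle looks something like this sketch:

preclear_disk.sh -c 1 /dev/sdX    # one full clear cycle as a stress test; sdX is the new, unassigned drive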

 

Also, it is very important to remember that formatting a drive is NEVER a part of the rebuild process. Don't make the mistake of replacing a drive with a new drive then formatting it.

 

Above, you said replace both and rebuild. It is important to remember that unRAID can only rebuild a single drive if all the other drives including parity are available for it to calculate the rebuild data. In other words, you cannot replace both and rebuild. You must replace one and rebuild it, then replace the other and rebuild it. It's not really magic, just (very simple) math. See this wiki for a better understanding of how parity works.
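To make the "(very simple) math" concrete: with single parity, each parity bit is the XOR of the corresponding bits across the data disks, so any one missing disk can be recomputed from the others. An illustrative one-byte sketch:

d1=$(( 0xA5 )); d2=$(( 0x3C ))                  # bytes at the same offset on two data disks
p=$(( d1 ^ d2 ))                                # the parity byte that gets stored
printf 'recovered d1 = 0x%X\n' $(( p ^ d2 ))    # XOR parity with the surviving disk: prints 0xA5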

 

Also, rebuilding a disk will result in the same file system you had on the original disk.

 

As far as getting to XFS, the thing to remember is that when you change the file system of a disk, it will be formatted. So you only want to do that after you have moved or copied its data to other disks.

 

Here it is broken down into steps, but if you understand the above you could perhaps come up with other approaches that would maybe include more drives.

 

Let's call the drives disk1 and disk2, and they are both 1TB and will both be replaced (one at a time!) with 2TB.

 

1) Rebuild 1TB disk1 (ReiserFS) to a 2TB disk. This will give you a 2TB disk1 (ReiserFS) with more free space.

2) Move or copy all the data from 1TB disk2 (ReiserFS) to 2TB disk1 (ReiserFS).

 

Then when you are satisfied that all of the 1TB disk2 data is on the 2TB disk1 correctly, you can:

 

3A) Format 1TB disk2 (ReiserFS) to XFS, resulting in an empty 1TB disk2 (XFS). (You didn't need any of that data since you already put it on the other drive, right?)

4A) Rebuild empty 1TB disk2 (XFS) onto a 2TB disk, resulting in an empty 2TB XFS disk.

 

OR

 

3B) Rebuild 1TB disk2 (ReiserFS) to a 2TB disk, resulting in a 2TB disk2 (ReiserFS).

4B) Format 2TB disk2 (ReiserFS) to XFS, resulting in an empty 2TB disk2 (XFS). (You didn't need any of that data since you already put it on the other drive, right?)

 

So either way, you wind up with an empty 2TB disk2 (XFS).

 

5) Move or copy all of the data from the 2TB disk1 (ReiserFS) to the empty 2TB disk2 (XFS).

 

6) Format 2TB disk1 (ReiserFS) to XFS, resulting in an empty 2TB disk1 (XFS). (You didn't need any of that data since you already put it on the other drive, right?)

 

So, in the end, you wind up with both disk1 and disk2 as 2TB XFS. All of the original disk1 and disk2 data is on disk2, and disk1 is empty, waiting for you to put other data onto it from perhaps other disks so you can repeat the process.

 

I won't go into the details of how you actually do the copy/move. I did mine in mc. Others preferred rsync with verify. I think some may have even done it over the network with PC software.
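For reference, a minimal rsync-with-verify sketch (disk numbers are examples; adjust to your source and destination):

rsync -avX /mnt/disk1/ /mnt/disk2/     # copy everything, preserving attributes
rsync -navc /mnt/disk1/ /mnt/disk2/    # dry run with checksums; should list no files if the copy is complete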

 

Perhaps so many words it will only serve to further confuse.

Link to comment

 

Wow.  No.  I get it now!  Thanks to you and gundamguy for taking the time to re-pour the words back into one ear, because clearly, they had gone in one ear, and drained out the other!

 

I see now it'll be a bit longer process than I had thought, but that's OK.

 

I really appreciate it.  Thanks, guys.

 

 

John

Link to comment

Same thing happens to me. Even though I've been using unRAID for several years and work in the IT world, you'd think I'd be an expert with unRAID and all its functions and abilities. NOT. It is actually praise for unRAID, in a way: since I really never had issues with it, I never really needed "support" for it. unRAID has been the only thing I have ever purchased where I feel comfortable that support is really only via a forum message board. The knowledge and patience the moderators have here is truly amazing. I know they probably see the same questions over and over. Maybe there should be a new 2015 FAQ for the new version, with just about every kind of question and answer? It would be really nice to be able to search it. XFS check-filesystem information could be added; I had to look that up on another site.
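For the record, the XFS filesystem check referred to here (again with the array started in Maintenance mode, and the disk's md device substituted) is along these lines:

xfs_repair -n /dev/mdX    # -n = no modify; just report what it finds
xfs_repair /dev/mdX       # run without -n to actually make repairs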

 

 

Link to comment


Throughout the history of the wiki, the demand (for info) has far outstripped the supply.  Or put another way, there are far more users wanting information than there are users willing to put it into the wiki.  It's almost completely a community effort, but volunteers have been few and far between.  If anyone reading this is interested in helping, please see Wiki editors needed.

Link to comment

I've started moving my backup server to XFS, following bjp999's instructions on the first page to the letter.  I'm on step 7, the first disk is being copied to the destination, and I noticed something odd.

 

While most of the files are getting copied to the t folder, a few very recent files (along with new files that are being created) are going to a separate folder on the root:

 

/mnt/disk21/t/  <most of the files are going here>

/mnt/disk21/folder  <a few recent files are going here>

 

"folder" is a hidden share

 

Also odd, a /mnt/disk21/t/folder has been created, but Windows tells me I have no rights to access it when I try to browse the content.

 

Any ideas?  :)

Link to comment


disk21 is included in the "folder" share, and something is writing to that share.
Link to comment
