Best way to go from 4.5.4 to 6.0 for small server?


In2Photos


I currently have a small server (3 drives, 2TB of data) running the free version of 4.5.4, and it is full. I bought a 3TB drive to put in it, not realizing that 4.x can't use drives that big. So I figure it is time to upgrade. I waited for 6.0 to come out and was planning to do the following:

 

1. Copy my data from the server to some external drives attached to another computer.

2. Download 6.0 and install it on a new flash drive (the current drive is only 256MB :o).

3. Install the new 3TB disk in the machine. This will give me the 3TB drive for parity and two 1.5TB disks for data; I will probably not use the 4th disk (500GB) right now.

4. Once the server is up and running copy all the data back to the server.

 

Of course I will need to follow all of the advice for a new drive, and this will take some time, but it seems like the easiest, and maybe safest, way to do this.

 

Thoughts?


The guide to update to v6 requires that you first upgrade to v5, but includes a link for upgrading from 4.7 to v5.

 

HOWEVER ... since you're (a) starting from an even earlier version than 4.7, and (b) only dealing with 2TB of data ... it's by far the easiest to simply do as you've suggested ==> copy all your data to another location; then create a new v6 system; and then copy the data back.  [No "migration" involved]

 

Before doing that, I'd buy a Basic license, so you don't have to deal with the trial's 30-day expiration.

 

Then just add the 3 drives you are planning to use [3TB for parity, two 1.5TB's for data], and then copy your data to the new system.
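If you do copy the data back over the network, rsync works well since it preserves timestamps and can resume after an interruption. A minimal sketch, assuming SSH access to the new server; the hostname "tower", the source path, and the share name "Movies" are all placeholders:

    # Copy the backed-up share to the new v6 server (resumable, verbose).
    # Hostname, source path, and share name are examples only.
    rsync -avh --progress /mnt/backup/Movies/ root@tower:/mnt/user/Movies/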

 

Done :-)

 


Can't you just add the data disks without a parity disk in unRAID 6, start the array, see if all the shares and data are there, and then add the new parity disk and let it rebuild parity?

 

Yes, that would work just fine -- and is indeed easier. There have been so many folks trying to convert their RFS disks to XFS (not really necessary) that I was thinking he'd want to format the disks in XFS ... and in that case just starting fresh is much simpler with so little data.

 

But if the file system isn't an issue, you're right -- that's by far the quickest and simplest way to move to v6.

 


Can't you just add the data disks without a parity disk in unRAID 6, start the array, see if all the shares and data are there, and then add the new parity disk and let it rebuild parity?

 

That sounds like it would be even faster! I've already backed up the 2TB of data, but if I don't have to write it back that would certainly be better!

I'm not sure what the difference is between RFS and XFS. I'll have to read up and see if I would want to convert.


Thanks for the help so far!

 

Anyone see the benefit of using the 500GB disk for cache? I don't write much to the server; it's mostly movie storage. I thought it might help with moving the data back to the server, but if I just move the data drives to V6 then I don't have to worry about writing the data back anyway.


RFS is the Reiser file system, which was used in UnRAID through v5, and is still supported in v6. XFS is a newer file system that's now the preferred file system for v6. There's really no reason to convert your older disks ... you can simply move them to the new array and be done with it. As you add disks in the future, you'll likely want to format them with XFS, but there's no problem mixing the disk types in the system.
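For reference, you can confirm which file system each disk is using from a console/telnet session; unRAID v6 mounts the array disks at /mnt/disk1, /mnt/disk2, and so on:

    # Show the file system type (reiserfs or xfs) of each mounted array disk.
    df -T /mnt/disk*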

 

 


Anyone see the benefit of using the 500GB disk for cache? I don't write much to the server; it's mostly movie storage. I thought it might help with moving the data back to the server, but if I just move the data drives to V6 then I don't have to worry about writing the data back anyway.

 

I wouldn't bother => personally I prefer to know that the data I write to my UnRAID server is immediately fault-tolerant ... that is NOT the case when you use a cache drive [it's not fault-tolerant until it is later moved to the array by the Mover process].
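(For context: the Mover runs on the schedule set under Settings, but it can also be kicked off by hand from a console session. A sketch, assuming the stock v6 script location:)

    # Trigger the Mover manually instead of waiting for its schedule.
    # /usr/local/sbin/mover is assumed to be the stock v6 script path.
    /usr/local/sbin/mover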

 

  • 2 weeks later...

Yesterday I began the task of migrating to V6. Everything went really well. The new parity drive is in and the parity sync is done. So the "hard part" is over, I guess.

 

Once the array was back up and running I needed to add the original 1.5TB parity drive back to the server as a data drive. I plugged it back in and booted up the server. I assigned the drive and began the clear process. I knew this would take some time but right now it has been running for about 14 hours and is only 20% complete. That seems like a really long time (about 70 hours to clear the entire drive). Does this seem right? Should it take that long?
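(A quick sanity check on those numbers - plain arithmetic, not an unRAID command:)

    # 20% in 14 hours extrapolates to ~70 hours for the full 1.5 TB.
    # That implies only ~6 MB/s; a healthy drive clears at sequential-write
    # speed (~80-100 MB/s), i.e. roughly 4-5 hours for 1.5 TB.
    awk 'BEGIN { printf "%.1f MB/s\n", 1.5e6 / (70 * 3600) }'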


That does seem a bit excessive - it is possible the disk is having issues. I would have expected 1.5TB to only take a few hours. You might want to go to the diagnostics option on the Tools menu to get a ZIP of logs and configuration files that you can attach to a post, so others can look through it to see if any issue can be spotted.
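(The same ZIP can also be generated from a console/telnet session, assuming your v6 build includes the diagnostics console command; the archive is written under /boot/logs on the flash drive:)

    # Build the diagnostics ZIP from the console; it lands in /boot/logs.
    diagnostics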

 

Note that experienced users would not have assigned the drive at this point, as the clear process takes the array offline while the clear is in progress, and having the array offline for an extended period is seen as undesirable. Instead they would use the pre-clear script from a console/telnet session to both carry out a confidence check on the drive and do the clearing while keeping the array available. If the pre-clear process completes with no issues reported, then the disk can be added to the array and unRAID will recognise that the clear is not needed, so the array is only down for a minute or so while you do the assignment and restart the array. At some point this pre-clear capability will probably be part of the core unRAID release, but that is not yet the case.
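(For reference, the widely used pre-clear script is Joe L.'s preclear_disk.sh, typically dropped onto the flash drive and run against the raw device. The device name is a placeholder - triple-check it, since pre-clearing wipes the drive:)

    # Confidence-check and zero a disk while the array stays online.
    # Replace sdX with the actual device; this destroys its contents.
    /boot/preclear_disk.sh /dev/sdX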

Thanks for the info! My server is mostly a media server for movies and we don't access it that much, so being offline isn't a big deal. Or at least it wasn't when I figured it would only be offline overnight! Otherwise I probably would have gone the preclear route. I will definitely use that for future disks. I have attached the diagnostics zip file if anyone wants to take a look.

mytower-diagnostics-20150608-1417.zip


I don't see it mentioned in this thread so far, but once everything is going again, you will have to go to Tools - New Permissions to get all your files/folders ownership/permissions set to conform with unRAID v5/6.

 

Thanks for the reply! Yes, I saw that mentioned in another thread. There were a few moments of panic when I couldn't access the shares from another machine. But running New Permissions solved that!
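(Under the hood, Tools - New Permissions runs the newperms script, which can also be pointed at a single directory from the console; the share name here is just an example:)

    # Re-apply the v5/v6 ownership and permission scheme to one share.
    # "Movies" is a placeholder share name.
    newperms /mnt/user/Movies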


I just finished a post discussing the question of skipping versions to upgrade to v6; please see Upgrade MB and OS from 4.7 to 6 questions.

 

Any users who do skip any upgrades - please report back on what methods you used, and what problems you had, so we can better advise other users.

 

Sure thing! Here is how the upgrade went for me.

 

1. Took screenshots of all settings, then shut down the 4.5.4 server and unplugged the flash drive. I also unplugged the parity drive.

2. Downloaded V6 and followed the instructions to load the OS onto my new flash drive.

3. Booted into V6; I had to go into the BIOS and set the flash drive as the boot device. It was listed as a HDD and not a removable device.

4. Once booted I clicked the link to purchase a registration key (I was using the free version previously).

5. I shut down the server and moved the flash drive to my Windows machine to copy the key file to the config folder (see the sketch after this list).

6. Moved the flash drive back to the server and reset BIOS settings, then booted the server.

7. Next I assigned the data drives and checked the user shares, then entered all the other settings.

8. Added the new 3TB drive as parity and ran a parity sync.

9. Tried to access my data and realized that I hadn't run permissions. Ran permissions.

 

 

At this point it seems like all worked as planned and went better than expected.

 

10. Added the original 1.5TB parity drive to use as a data drive, and it is taking a long time to clear.
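(Re step 5: the key file just goes into the config folder at the root of the flash drive. A sketch from a Linux machine rather than Windows; the key filename and flash mount point are placeholders:)

    # Copy the registration key into the flash drive's config folder.
    # "Basic.key" and /mnt/usb are examples -- use your actual names.
    cp Basic.key /mnt/usb/config/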


The SMART report for disk WDC WD15EADS-00S2B0 shows that it is in trouble and is likely to fail at any moment. It has a FAILING NOW value for reallocated sectors. If any SMART attribute is marked as FAILING NOW, the disk should be retired ASAP. I am assuming that is the one you are trying to add, so that could explain why it is taking so long.
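(To check this yourself from a console session: smartctl prints the attribute table, and a failing attribute shows FAILING_NOW in its WHEN_FAILED column. The device name is a placeholder:)

    # Dump SMART data; attribute 5 (Reallocated_Sector_Ct) is the one
    # flagged here. Replace sdX with the actual device.
    smartctl -a /dev/sdX | grep -E 'WHEN_FAILED|Reallocated'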


3. Booted into V6; I had to go into the BIOS and set the flash drive as the boot device. It was listed as a HDD and not a removable device.

 

Thank you! I suspect #3 happened because you unplugged a drive. Some BIOSes like to be *helpful* and, after any drive change, set the boot order to what they think is right for you! Not helpful at all!

 

A question: did you have any added users and passwords before? What do they look like now?

A question: did you have any added users and passwords before? What do they look like now?

 

Yes, anytime I unplugged the flash drive I had to go back into the BIOS.

 

I did not have any users or passwords before.


The SMART report for disk WDC WD15EADS-00S2B0 shows that it is in trouble and is likely to fail at any moment.

 

Thanks! Guess it's time to buy a new drive!


10. Added the original 1.5TB parity drive to use as a data drive, and it is taking a long time to clear.

 

Not really relevant, since you're going to replace it anyway due to the S.M.A.R.T. issues => but FYI clearing a drive DOES take a LONG time (many hours).  That's normal.

 

That's why many of us use the "pre-clear" utility to do the clearing "outside" of UnRAID [on the same system, but it runs directly under Linux, so you can still run UnRAID and use your array while the clearing process is underway]. Once you've pre-cleared a drive, you can add it to your UnRAID array, and it will then only require formatting (a very quick process).

 


Archived

This topic is now archived and is closed to further replies.
