
Upgrading to UnRAID v6



We STRONGLY RECOMMEND upgrading to the latest v6!

 

* ALL versions prior to the current v6 have known security vulnerabilities, leaving them open to attack by an infected machine on your local network.

 

* LimeTech has been seriously considering for some time moving ALL versions prior to v6 to End-Of-Life.

 

* Maintenance and support are becoming harder and scarcer as time goes by.  There are fewer and fewer people familiar with your version who are able or willing to help.

 

* Numerous bugs have been fixed.  You may be running without issue now, but something can still go wrong, or you may try something different and run into an issue long since fixed in newer versions.

 

* You're missing out on a lot of new features, and perhaps better performance too.

 

* One very important improvement is Notifications, which makes it a better NAS!  The system will inform you immediately of problems.

 

* After upgrading, you will still be able to run all of your favorite applications, and they may even run better, because in v6 there are multiple ways to run them.

 

Please check out the Upgrading to UnRAID v6 guide.


Ok, this may seem a bit odd, but most of my UNRAID servers are way back on 4.5.3.  They have been rock stable forever, with the exception of power outages, a failed motherboard, and a few power supplies here and there over the years.

 

I am finally done with my tests on 6.x and am ready to commit to retiring and upgrading from the 4.x world...  But I see no link that actually takes me to any upgrade guide now.  Am I missing something?


Doesn't look like that is going to help much from your very old version.

 

First thing, your hardware must support 64-bit.

 

How big is your flash drive, and how much RAM do you have?

 

Any RAID controllers involved?

 

17 hours ago, trurl said:

Doesn't look like that is going to help much from your very old version.

 

First thing, your hardware must support 64-bit.

 

How big is your flash drive, and how much RAM do you have?

 

Any RAID controllers involved?

 

The oldest one is now running a Core 2 Duo (64-bit) E7200 with 2 GB of RAM, expandable to 4 GB.

 

It is on a 4 GB flash drive.

 

SATA drives are running on:

1 ea - SuperMicro SAT2-MV8 (running in PCI mode) (8 drives)

1 ea - Promise FastTrak S150 TX4 PCI card (4 drives)

  The rest of the drives are on the motherboard SATA ports (4 drives)

 

15 hours ago, electron286 said:

The oldest one is now running a Core 2 Duo (64-bit) E7200, with 2 GB of RAM, expandable up to 4 GB of RAM.

You really need to upgrade this to 4GB (which is now the recommended minimum) if you want to use it with the current v6.   It will probably run most basic functionality with 2GB, but certain features (such as OS upgrade via the GUI) may well fail to work correctly.


It looks like I will just need to copy/move the files over to new/reformatted drives on the array for use under v6.x.  Not fun, but it looks like it is needed.

 

It also seems it may be time to consider a motherboard/SATA controller upgrade (but probably after I migrate to v6.x, I think).  8- or 16-port SATA controllers on PCIe would definitely increase parity check/rebuild speeds.

 

I was trying to delay any big changes since the hardware has been working so well for so many years...  but looking at options now, I am not really sure.


Various system component speeds - I made this little breakdown of various system components to assist my decision making on my upgrade.  Depending on actual use, it can help decide where to focus hardware upgrades...  I hope someone else may also benefit from it.

 

Newer SSDs (SATA and NVMe), while not listed here, are so fast overall that for actual needs the faster options only really pay off in parity checks and array rebuilds.  Read/write speeds of 500/400 MB/sec per device are screaming fast for most Unraid uses.  It may still be very beneficial to performance to use a higher-end, faster drive for parity and/or cache.  But watch the caveat: some faster drives also have reduced MTBF ratings...

 

Here are the speed breakdowns for various components, using general best-case data on a well-designed system (a small Python sketch of the per-drive arithmetic follows the list):

 

PCI-E options to replace old PCI or PCI-X SATA controller cards, and speed comparisons of various system components:

 

SATA - 
SATA 1 - 1.5 Gb/sec = 150 MB/sec
SATA 2 - 3.0 Gb/sec = 300 MB/sec
SATA 3 - 6.0 Gb/sec = 600 MB/sec

 

Hard Drives
5400 RPM up to ? - up to 180-210 MB/sec
7200 RPM - up to 1030 Mb/sec disk-to-buffer - up to 255 MB/sec burst - 204 MB/sec sustained


NETWORK limits (at 94% efficiency):
10Mb = 1.18 MB/sec
100Mb = 11.8 MB/sec
1Gb = 118 MB/sec
2.5Gb = 295 MB/sec
10Gb = 1180 MB/sec


PCI 32-bit 33 MHz
133.33 MB/sec ÷ 8 drives = 16.66 MB/sec per drive!
133.33 MB/sec ÷ 4 drives = 33.33 MB/sec per drive!
133.33 MB/sec ÷ 2 drives = 66.66 MB/sec per drive!

 

PCI 32-bit 66 MHz
266 MB/sec ÷ 8 drives = 33.25 MB/sec per drive!
266 MB/sec ÷ 4 drives = 66.5 MB/sec per drive!
266 MB/sec ÷ 2 drives = 133 MB/sec per drive!

 

PCI-X 64-bit 133 MHz
1072 MB/sec ÷ 8 drives = 134 MB/sec per drive!
1072 MB/sec ÷ 4 drives = 268 MB/sec per drive!
1072 MB/sec ÷ 2 drives = 536 MB/sec per drive!


PCI-E 3.0 x1 lane = 1 GB/sec ÷ 16 drives = 62.5 MB/sec per drive!
PCI-E 4.0 x1 lane = 2 GB/sec ÷ 16 drives = 125 MB/sec per drive!

PCI-E 3.0 x1 lane = 1 GB/sec ÷ 8 drives = 125 MB/sec per drive!
PCI-E 4.0 x1 lane = 2 GB/sec ÷ 8 drives = 250 MB/sec per drive!

PCI-E 3.0 x1 lane = 1 GB/sec ÷ 4 drives = 250 MB/sec per drive!
PCI-E 4.0 x1 lane = 2 GB/sec ÷ 4 drives = 500 MB/sec per drive!


PCI-E 3.0 x2 lanes = 2 GB/sec ÷ 16 drives = 125 MB/sec per drive!
PCI-E 4.0 x2 lanes = 4 GB/sec ÷ 16 drives = 250 MB/sec per drive!

PCI-E 3.0 x2 lanes = 2 GB/sec ÷ 8 drives = 250 MB/sec per drive!
PCI-E 4.0 x2 lanes = 4 GB/sec ÷ 8 drives = 500 MB/sec per drive!

PCI-E 3.0 x2 lanes = 2 GB/sec ÷ 4 drives = 500 MB/sec per drive!
PCI-E 4.0 x2 lanes = 4 GB/sec ÷ 4 drives = 1000 MB/sec per drive!


PCI-E 3.0 x4 lanes = 4 GB/sec ÷ 16 drives = 250 MB/sec per drive!
PCI-E 4.0 x4 lanes = 8 GB/sec ÷ 16 drives = 500 MB/sec per drive!

PCI-E 3.0 x4 lanes = 4 GB/sec ÷ 8 drives = 500 MB/sec per drive!
PCI-E 4.0 x4 lanes = 8 GB/sec ÷ 8 drives = 1000 MB/sec per drive!

PCI-E 3.0 x4 lanes = 4 GB/sec ÷ 4 drives = 1000 MB/sec per drive!
PCI-E 4.0 x4 lanes = 8 GB/sec ÷ 4 drives = 2000 MB/sec per drive!
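
If you want to rerun these numbers for your own drive counts, here is a minimal Python sketch of the same shared-bus arithmetic (the bus figures are the best-case numbers from the list above; the helper name is my own):

# Per-drive bandwidth when several drives share one bus.
BUS_MB_PER_SEC = {
    "PCI 32-bit/33MHz": 133.33,
    "PCI 32-bit/66MHz": 266.0,
    "PCI-X 64-bit/133MHz": 1072.0,
    "PCIe 3.0 x1": 1000.0,   # ~1 GB/sec per lane
    "PCIe 4.0 x1": 2000.0,   # ~2 GB/sec per lane
}

def per_drive(bus: str, drives: int, lanes: int = 1) -> float:
    """Best-case MB/sec per drive when `drives` share the bus equally."""
    return BUS_MB_PER_SEC[bus] * lanes / drives

for drives in (2, 4, 8):
    print(f"PCI 33MHz, {drives} drives: {per_drive('PCI 32-bit/33MHz', drives):.2f} MB/sec each")
# PCIe throughput scales with lane count, e.g. a 4-lane Gen 3 card with 8 drives:
print(f"PCIe 3.0 x4, 8 drives: {per_drive('PCIe 3.0 x1', 8, lanes=4):.1f} MB/sec each")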


Cheap low-power motherboard NAS array options...  Yes, port multipliers divide bandwidth, but when planned out properly they can be very useful for building an array.  The ASMedia port multipliers do not seem as reliable so far when compared with the JMicron devices.  There are also some very attractively priced N100-based boards now with JMicron PCIe-SATA controllers onboard.  It is best to keep to one family of controller/multiplier when cascading devices, to maintain the best interoperability and reliability.  I am starting tests with various ASMedia PCIe-to-SATA controllers for upgrading older PCI-based systems, and I am also looking at the N100 w/JMicron route for testing, which seems a better option for larger arrays.

 

Obviously there is less bandwidth to work with than the LSI 12Gb SAS/SATA dual link via LSI port multipliers.  But for a new low-power system, the N100 option looks very attractive.  And if not pushed to limits that cause bandwidth throttling (see OPTION 2 below), with SPINNING hard drives the new cheaper option looks like it should be able to do parity checks at speeds comparable to the LSI build! (possibly limited more by the new CPU)

 

N100-based MB - NAS bandwidth calculations (a sketch of the arithmetic follows the two options below):

 

w/ JMB585 PCIe-SATA bridge controller - PCIe Gen 3 x2 to 5x SATA III 6Gb/sec

 

ADD - JMB575 port multipliers - 1-to-5-port SATA 6Gb/s; cascaded mode: up to 15 drives from 1 SATA port

Cascaded mode: up to 75 drives from the 5 JMB585 SATA ports!

 

NOTE: 6Gb/sec SATA = 600 MB/sec max potential unshared bandwidth per SATA port

 

OPTION 1

PCIe Gen 3 x2 (PCI-E 3.0 2 lanes = 2 GB/sec) ÷ 5 ports = 400 MB/sec per port!

5 ea JMB575 multipliers, 1 per port from the JMB585 ports

25 ports total

SATA 6Gb/sec = 600 MB/sec max potential unshared bandwidth per port

400 MB/sec per port, FULL USAGE AVERAGED = 80 MB/sec per drive averaged over 25 drives

 

OPTION 2

3 ea JMB575 multipliers (1st level), 1 per port on 3 of the JMB585 ports

15 ea JMB575 multipliers (2nd level), 1 per port from the 1st-level JMB575 multipliers

75 ports total

SATA 6Gb/sec = 600 MB/sec max potential unshared bandwidth per port

1st level non-limiting - 600 MB/sec per port (200 MB/sec of PCIe bandwidth unused)

2nd level 600 MB/sec per port, FULL USAGE AVERAGED = 120 MB/sec per drive averaged over 75 drives

 

FULL USAGE = Full array activity - Parity check/build, Drive rebuild
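
A minimal Python sketch of the OPTION 1 arithmetic above, in case anyone wants to adjust it for a different layout (the figures are the ones from above; the framing that each link caps everything behind it is my own):

# Averaged per-drive bandwidth behind shared uplinks (OPTION 1).
SATA3_MB = 600.0          # 6Gb/sec SATA = 600 MB/sec max per port
PCIE3_LANE_MB = 1000.0    # ~1 GB/sec per PCIe 3.0 lane

controller_mb = 2 * PCIE3_LANE_MB            # JMB585 on PCIe Gen 3 x2: 2000 MB/sec
port_mb = min(SATA3_MB, controller_mb / 5)   # 5 ports share the uplink: 400 MB/sec each
per_drive_mb = port_mb / 5                   # each JMB575 fans one port out to 5 drives

print(f"{per_drive_mb:.0f} MB/sec per drive averaged over 25 drives")   # 80 MB/sec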


 


Well, as testing progresses on multiple controllers for my upgrades, I have realized that cheap and low power are also relative.  For a small array of drives, the JMicron and ASMedia controllers, and even expanders, can work well and be low in power consumption.  But with a larger array, the LSI 9207 and clones can have lower power consumption while still providing greater array data throughput, due to the ability to use 8 PCIe lanes.  From my tests, the JMB585 can outperform the LSI 9207 for small arrays, as long as no more than 3 of the 5 ports are used on the JMB585; this includes using JMB575 multipliers, with no more than 3 ports of each JMB575 in use.  This limits high-performance use of the JMB585/JMB575 combination to no more than 9 drives to prevent significant data throttling (see the sketch below).

Speeds will of course decrease with the full array active, such as during parity build/check and drive reconstruction.  So with 9 spinning drives, a data rate of near 200 MB/s per drive can be seen with full array activity.  Similar numbers can be had using the ASMedia ASM1166 controller with the JMB575 multiplier.  However, if more than 3 drives per port are to be run at full speed, especially higher-performance spinners or SSDs, the LSI 9207's performance exceeds the PCIe 2-lane limitation of the JMB585 and ASM1166.  And if adding multipliers to the cheaper controllers, the LSI 9207 can actually use less power than a multi-controller/multiplier setup, all without the extra cables and boards (placement) mess.  What is interesting is that the newer 9300 not only offers greater potential speeds than the 9207, but in some tests it also uses less power, being based on newer silicon.
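
A quick sketch of why the 9-drive limit lands near the 200 MB/s figure (assuming ~1 GB/sec usable per PCIe 3.0 lane; the drive counts in the loop are my own illustration):

# Full-array throughput per drive behind a JMB585 (PCIe Gen 3 x2 uplink).
PCIE3_LANE_MB = 1000.0                # ~1 GB/sec usable per PCIe 3.0 lane
uplink_mb = 2 * PCIE3_LANE_MB         # ~2000 MB/sec total for the controller

for drives in (9, 15, 25):
    print(f"{drives:2d} drives: {uplink_mb / drives:.0f} MB/sec per drive, full array active")
# 9 drives  -> ~222 MB/sec each (the 'near 200 MB/s' seen in testing)
# 15 drives -> ~133 MB/sec each
# 25 drives -> ~80 MB/sec each (significant throttling)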

 

The 9207 and the 9300 (and clones and copies) are both limited to PCIe 3.0 8-lane speeds.  The 9207 is, however, further bottlenecked by its 6Gb SAS/SATA links, so the PCIe bus is not normally expected to be the limiting factor in use.  The 9300, with its 12Gb SAS / 6Gb SATA links, can actually be limited in overall speed by the PCIe 3.0 8-lane bus, especially if adding SAS expander(s) to the controller for a larger array (a rough check of that arithmetic follows below).  I have pretty much completed my testing of SATA/SAS controllers and associated multipliers and expanders now, but need to compile all the numbers into a readable presentation.
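
As a rough sanity check of which side bottlenecks first (the usable per-lane and per-link figures are approximations I am assuming, not measurements):

# Which limits first: the PCIe 3.0 x8 host link, or the 8 drive-side links?
PCIE3_LANE_MB = 985.0                 # ~985 MB/sec usable per PCIe 3.0 lane
host_mb = 8 * PCIE3_LANE_MB           # both HBAs are PCIe 3.0 x8: ~7880 MB/sec

for name, link_mb in (("LSI 9207 (8x 6Gb SAS/SATA)", 8 * 600),
                      ("LSI 9300 (8x 12Gb SAS)", 8 * 1200)):
    limiter = "PCIe bus" if link_mb > host_mb else "drive-side links"
    print(f"{name}: ceiling ~{min(link_mb, host_mb):.0f} MB/sec, limited by the {limiter}")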

 

Quick summary for overall speed-oriented arrays: the JMB585 and ASM1166 2-lane PCIe 3.0 SATA controllers work great, and a system could be loaded with multiples of them to get a fast and reliable array, using no more than 3 ports of each controller.  With more ports in use, performance decreases, which may not be noticeable at all with spinners.  Adding SATA port multipliers, I have been unable to find any stability issues as long as NO MORE than 3 ports per controller or multiplier are used.  However, the performance and ease of build using SAS expanders quickly make the LSI 9207/9300 the better option, and they will at times consume less power than the SATA controller/multiplier option.

 

Next up - 2.5Gb Ethernet controllers, Realtek vs Intel.  Initial testing indicates they are about equal in both CPU usage and data throughput, but the Intel options seem a bit more efficient at massive small-file transfers.  However, 10Gb NICs and switches now have some very affordable options on the market, so it may be time to look more seriously at 10Gb instead of 2.5Gb, or at a hybrid 10Gb backbone with select 2.5Gb (or lower) drops to some clients.

