unRAID Server Release 6.0.0-x86_64 Available



I dunno Gary, I think that's overstated.  Most people presently using unRAID are already tech-savvy, especially if they got to the point of installing v5 plugins.  There will be exceptions to prove the rule, but I'm thinking most people, while possibly irritated, won't find it too difficult to convert.  For those who don't read anything, boot up, and say, "wtf happened to all my plugins?", the plugin will allow them to 'undo' the upgrade and go back to v5.

 

So when do you think you will have the plugin released?  I assume before the unRAID 7 that you just announced, right? :)

 

I think everyone should hold their horses a bit.  I think Tom was saying that after the version 6.X.X series of releases (however many that may be) has run its course over the next several (perhaps??) years and version 7 is released, your Docker containers from version 6.X.X will work, and the upgrade from 6.X.X to 7.0.0 will be handled by the plugin manager.  (In my wildest speculation, I would envision that version 7 will be the first to have dual parity.)

Link to comment

... the big players in the OS arena have all released versions that are incompatible with earlier versions at some stage in their histories.

 

True, but I don't think the degree of incompatibility was as significant ... e.g. Windows has always had a "compatibility mode" for previous versions; and when they made the jump to a version that was fairly significantly incompatible with software from TWO versions earlier (i.e. Windows 7 vis-à-vis XP), they included a special mode (XP Mode) that allowed those earlier programs to still run.

 

In addition, Microsoft has, for several versions, provided a "compatibility checker" that you could run BEFORE an upgrade, which would list both any hardware issues AND all software that was installed but known to have compatibility issues with the new version.

 

Link to comment


Giving it further thought whilst brushing my teeth a minute ago, though, v6 is the first unRAID version that can be considered more of a full OS than just a NAS with unsupported plugins, which is what previous versions were.

Information should be (and is) given to upgraders that plugins aren't going to work in the new version, but a line has to be drawn somewhere, and this release marks the first of the OS versions.

Link to comment

I think the real problem is that the terminology we have been using is all wrong.  We are not really updating or upgrading our servers in the manner we are accustomed to with Windows, Linux, or the Apple OSs.

 

We are installing a brand new unRAID OS whose NAS functions are identical to the older 4.7 and 5.X versions of unRAID and, at the same time, won't run any software packages that ran with the earlier versions.  Repeating that same thought in slightly different words: the main point is that unRAID version 6 will basically run nothing that worked with these early versions.

 

If you installed any plugins or add-ons to provide additional features, you are going to have to find replacements for all of those.  Plus, the preferred way is not to use plugins to provide these functions but to use Docker containers and VMs!

 

All of this is a big step, and many folks are hoping that there is a magic wand that can be waved in the air to have everything working as it did with the earlier versions.  That is simply not going to happen...

 

the big players in the OS arena have all released versions that are incompatible with earlier versions at some stage in their histories.

 

That was the exception and not the rule.  I know that I installed Office 2000 (she had the license for it) on my wife's Win7 computer when I built it, and I can recall that other people installed Office 97 on Win7 machines.  Upgrading from one version of the OS to the next was often done by 'over-laying' the OS and having most (not all) applications run.  (Granted, this often resulted in a hodge-podge, and a clean install of the OS and the old applications was often preferable.)  The main point being that old user programs often simply worked with the new OS.  The user wasn't sent scampering about looking for a new program/application to do the same task.

 

The big thing with unRAID version 6 is that a decision was made very early in its development to completely leave out all 32-bit support and make it a 64-bit-only OS.  There was considerable debate about this at the time.  Of course, unRAID 5 and earlier versions used a 32-bit OS.  This meant that any plugin that required any 32-bit support would not run at all.

 

I do not really think the decision was a bad one.  The one consequence is that everyone who was using plugins now has to find an equivalent replacement.  As version 6 evolved, Docker containers and VMs were developed to essentially eliminate the need for most plugins.  These solutions also eliminated many other problems with dependency conflicts.  As a result, unRAID will be a more rugged and reliable NAS server while still providing the user with even more options to exploit ever more powerful hardware.  It wouldn't surprise me to read of one of you putting a $700 GPU into your server to play the latest GPU-intensive game.

Link to comment

...  I think Tom was saying that after the version 6.X.X series of releases (however many that may be) has run its course over the next several (perhaps??) years and version 7 is released, your Docker containers from version 6.X.X will work, and the upgrade from 6.X.X to 7.0.0 will be handled by the plugin manager.  (In my wildest speculation, I would envision that version 7 will be the first to have dual parity.)

 

Agree -- it was by no means an announcement of v7  :) :)

 

I think he's simply saying that if you restrict your add-ons to Dockers and VMs, these will move directly to any new release with no changes => so if folks do that, there won't be any issue with future upgrades  :)

Link to comment


Ding! Ding! Ding! Ding! Ding!

Link to comment

Jon, the considered reply is appreciated.

 

All plugins that are added to unRAID post-installation are supported exclusively by the community.  If the community doesn't wish to support manual installations for your needs, I think you're probably out of luck.  Keep in mind, you're asking others to go through more work to satisfy your desire of eliminating egress from your server.  Again, I understand the security-conscious mindset, but it's a choice, and this choice comes with a trade-off: security over ease-of-use.

 

For clarity, are you saying that the unRAID team doesn't support Dynamix, or the plugins for Dynamix (or both)?

 

We knew there would potentially be at least one person out there who would run into this, but because it's such an uncommon use case, diverting resources to documenting it for you is just not going to happen anytime soon.

 

No problem, as noted, I assumed this would be low priority.

 

My history is in enterprise IT and network security (hence my over-zealous network configuration).  All the production quality networks I've worked on implement egress filtering.  If you guys want to target unRAID to business users, it's worth documenting the network requirements for unRAID (not the plugins).

 

Anyway, love the product, and the upgrade to v6, cheers to you and your team!

Link to comment

Yes, if you have a backup of your v5 config it's a matter of 2-3 minutes to get yourself back where you were.

 

True if you're simply using unRAID as a NAS => decidedly NOT true if you have plugins that you have to re-install as Dockers, configure, etc.

 

 

The point was that it takes 2-3 minutes to get v5 back up and running exactly as it was, if you backed it up before playing with v6.

Link to comment

I already gave the limetech staff a shout out on twitter... but after using v6 for a few days now, I'm even more impressed...

 

I'm running 6 Docker containers (including Plex Media Server), a VM (with 2 threads and 8GB of RAM allocated), and shifting data between hard drives (so I can go to btrfs from reiserfs), and the system isn't missing a beat.  The WebUI is super responsive...

 

Only thing left for me to do is to start preclearing a drive...

 

The one issue I had is that somehow my permissions got messed up on my cache drive (resulting in less than ideal behavior for Plex and Sonarr), but rerunning the new permissions utility fixed everything right away.

 

I hope nobody here is on the fence about upgrading.

Link to comment

You are brave to try btrfs with your data.  I use it only on cache and use xfs on the data drives.  Btrfs is still not as mature, although the checksumming and snapshots are fun.

 


Link to comment


Probably should carry this discussion elsewhere, but the bit rot protection is a really really nice feature considering how many TBs of data I have.

Link to comment

Using 5.06 on LimeTech's original "MD-1500/LL" and "MD-1510" hardware (2 machines), running 3 plugins each (SSH, mySQL, BTSync).

 

Will these 64-bit-capable systems, with limited CPU power and RAM, support these plugins running in Docker containers, or should I stay with 5.06 running the old plugins?

 

Thanks.

 

Link to comment


SSH is now part of the core system.  So long as there are Docker containers for the other two (and I believe there are), then you should be fine.  Docker isn't virtualisation, so it doesn't particularly come with any CPU overhead (not entirely true, as you need some CPU for the Docker framework and daemon, but on a per-process basis this is the case).

Link to comment

I also think Tom needs to find a way to increase parity speed.  Possibly using a CPU parity-calculating instruction?  I think this can be done via the "Parity flag", see https://en.wikipedia.org/wiki/Parity_flag

My experience is that parity speed (except on very low-power CPUs) is determined by disk hardware, such as transfer speeds/rotational speeds, and not by the CPU time involved.  Another limiting factor can be bus speed, if a disk controller is plugged into a port that has a lower bus speed than the controller is optimised for.  You speed it up by getting faster disk controllers and/or faster disks.
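To put rough numbers on the disk-bound point above, here's a back-of-the-envelope estimate (a minimal sketch; the 4 TB parity disk and 100 MB/s average throughput below are illustrative assumptions, not anyone's actual hardware):

```python
# Rough parity-check time: the check streams every sector of the largest disk,
# so the disk's average sequential throughput is the ceiling, not the CPU.
disk_size_tb = 4        # assumed size of the parity (largest) disk
avg_speed_mb_s = 100    # assumed average sequential read speed across the platter

seconds = (disk_size_tb * 1_000_000) / avg_speed_mb_s   # TB -> MB, then divide by MB/s
print(f"~{seconds / 3600:.1f} hours")                   # roughly 11 hours, whatever the CPU
```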

Link to comment


The computational overhead to do a simple XOR of the bits to compute parity is very low.  The time to update parity is effectively entirely due to the speed of the disks.  It takes 4 disk operations to do the update: a read of the current value on the data disk; a read of the corresponding sector on the parity disk; and then, after both of those sectors have been updated, a write of each.  There's no way to speed this up short of getting faster disks, unless the method of updating is changed to what has been referred to as "Turbo Write".

 

Turbo Write requires ALL disks to be spun up to do the writes.  Instead of TWO operations on each of the two involved disks, a write could be done with a single read on ALL data disks except the one being written, a write to that data disk, and a single write to parity ... i.e. ONE operation per disk.  This would eliminate waiting for a rotation of the disks to complete the operation.
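For anyone who wants to see the two strategies side by side, here is a minimal Python sketch (purely illustrative; the disk objects and their read/write interface are invented for the example, and this is not unRAID's actual code):

```python
def write_rmw(data_disks, parity_disk, idx, sector, new_data):
    """Standard write: 4 disk operations, but only 2 drives need to be spinning."""
    old_data = data_disks[idx].read(sector)        # op 1: read old data
    old_parity = parity_disk.read(sector)          # op 2: read old parity
    new_parity = old_parity ^ old_data ^ new_data  # XOR the old data out, the new data in
    data_disks[idx].write(sector, new_data)        # op 3: write new data
    parity_disk.write(sector, new_parity)          # op 4: write new parity

def write_turbo(data_disks, parity_disk, idx, sector, new_data):
    """'Turbo Write': every drive must be spun up, but only ONE operation per disk."""
    parity = new_data
    for i, disk in enumerate(data_disks):
        if i != idx:
            parity ^= disk.read(sector)            # read every *other* data disk
    data_disks[idx].write(sector, new_data)        # write the target data disk
    parity_disk.write(sector, parity)              # parity rebuilt from scratch
```

The XOR itself is trivial either way; the win with the second approach is avoiding the read-before-write (and the extra platter rotation) on the data and parity disks.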

 

This was actually implemented for a while for testing, and although it worked very well, it was NOT included in v6.    It IS on the roadmap for the future ...

This feature is being moved to a later release.

(Quoted from here: http://lime-technology.com/forum/index.php?topic=34521.msg375936#msg375936 )

 

 

I think dual parity is a must-have for the next version of unRAID.

 

While I, and many others, would love to have this feature, it is not yet on the roadmap for any near-term releases (i.e. 6.1 or 6.2).  It IS listed in the "Unscheduled" section, so it's at least on the list  :)

http://lime-technology.com/forum/index.php?board=63.0

 

Note, by the way, that depending on the specific implementation used to provide dual fault tolerance, the computational overhead will rise appreciably compared to what's required to compute a simple XOR'd parity bit.  So depending on the CPU, this may actually make writes to the protected array SLOWER  :)  [but not by an appreciable amount as long as you have a reasonably good CPU]
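To illustrate why the overhead rises, here's a toy Python comparison of single parity against a RAID-6-style P+Q scheme (just one common way dual fault tolerance is done, assumed here for illustration; it says nothing about what LimeTech will actually implement):

```python
def gf_mul2(x):
    """Multiply by 2 in GF(2^8) using the usual 0x11d polynomial (RAID-6 style)."""
    x <<= 1
    return (x ^ 0x11d) & 0xff if x & 0x100 else x

def parity_pq(stripe):
    """stripe = one byte from each data disk; returns the (P, Q) parity bytes."""
    p = q = 0
    for d in reversed(stripe):   # Horner's method: Q = sum of 2^i * D_i over GF(2^8)
        p ^= d                   # P: the plain XOR that single parity computes today
        q = gf_mul2(q) ^ d       # Q: needs a Galois-field multiply per disk, per byte
    return p, q

print(parity_pq([0x11, 0x22, 0x33]))
```

P is the same single XOR pass as today; Q adds a multiply per data disk for every byte written, which is the extra CPU cost being referred to (modern CPUs can vectorise this, but it is not free).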

 

 

 

Link to comment

Turbo Write requires ALL disks to be spun up to do the writes.  Instead of TWO operations on each of the two involved disks, a write could be done with a single read on ALL data disks except the one being written, a write to that data disk, and a single write to parity ... i.e. ONE operation per disk.  This would eliminate waiting for a rotation of the disks to complete the operation.

 

Hopefully this will be an option.  One of the appeals of unRAID for me is maximising drive spin-down.

 

Happy to have a parity write penalty as a result - and mitigate as best I can with the cache drive.

 

Appreciate others will have different needs but hopefully this won't be an enforced change.

Link to comment


If you read the discussion threads re: this feature, it's very clear it will be an optional feature if it's implemented.  You're not alone in liking the fact that only 2 drives have to spin up to do a write.

 

Link to comment

Using 5.06 on LimeTech's original "MD-1500/LL" and "MD-1510" hardware (2 machines), running 3 plugins each (SSH, mySQL, BTSync).

 

Will these 64-bit-capable systems, with limited CPU power and RAM, support these plugins running in Docker containers, or should I stay with 5.06 running the old plugins?

 

Thanks.

 

SSH is now part of the core system.  So long as there are Docker containers for the other two (and I believe there are), then you should be fine.  Docker isn't virtualisation, so it doesn't particularly come with any CPU overhead (not entirely true, as you need some CPU for the Docker framework and daemon, but on a per-process basis this is the case).

 

What are the specs of the systems?  I'm running 5-6 Dockers and have 2 VMs set up on my system, which runs on a Celeron!

Link to comment

It would seem that the new DHCP timeout option that was added to unRAID late in the game is affecting my pfSense VM.  I wrote my own powerdown start script (S00.sh) to start my VMs in order from 1 to 4 (pfSense starts first).  It would seem that with the new DHCP logic, unRAID tries to connect to the network for 60 seconds and fails (because my pfSense VM has not started), then finishes booting and gives br0 a random IP address, and only then starts the VMs, but by that time it's too late.  I am assuming that even if I set the pfSense VM to autostart I am going to have the same issue?  Does anyone have any thoughts on how to get my pfSense VM to start before br0 tries to get an IP address?  I have pfSense assign unRAID a static IP address.
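One workaround that might be worth trying, sketched below purely as a hypothetical (the domain name, gateway address, and the assumption that libvirt's virsh is what starts the VMs are all placeholders for this particular setup): start the pfSense guest first and only continue once its gateway answers, so br0's DHCP request has something to talk to.  Giving unRAID a static IP in its own network settings may be the simpler fix.

```python
# Hypothetical start-ordering helper: bring up the pfSense guest, then wait for
# its LAN gateway to answer before anything that depends on the network runs.
import subprocess, time

DOMAIN = "pfSense"            # placeholder: libvirt domain name of the router VM
GATEWAY = "192.168.1.1"       # placeholder: the LAN gateway address pfSense serves

subprocess.run(["virsh", "start", DOMAIN], check=False)   # already-running is fine

for _ in range(60):           # wait up to ~60 seconds for the router to come up
    ping = subprocess.run(["ping", "-c", "1", "-W", "1", GATEWAY],
                          stdout=subprocess.DEVNULL)
    if ping.returncode == 0:
        break                 # gateway answers; a DHCP request on br0 can now succeed
    time.sleep(1)
```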

Link to comment

Not so smooth an update here.  Previously running RC6, with a Pro key.

 

Before the update, I made a copy of the /boot/config folder and labeled it differently.

 

Updated both unRAID to v6.0.0 and Unassigned Devices to the version released today, via the plugin update button.

 

Rebooted with the powerdown script via SSH (powerdown -r).

 

Logged in via SSH.  Hmmm, authorized keys are not working; it asks for my password.  That's odd.

 

Tower/Main reports this:

 

Thank you for trying unRAID Server OS!

 

Your server will not be usable until you download a registration key....

 

Instead of selecting the free trial, which might have worked for now, I went back to the command line and rebooted again with powerdown -r.  Now I can't log in with SSH:

 

ssh: connect to host xxx.xx.xx port 22: Network is unreachable

 

It's not an emergency (since GoT is over), but I'm doing this remotely and won't be at the server physically for a few days. Reporting mainly in case what I did is an error others might do too.

 

Is this related to the duplicated /config folder?  The Unassigned Devices plugin?

 

Thanks

 

D

 

Hmm, definitely not the norm, as you can see from most folks in this thread who've upgraded.  We'll need to see a diagnostics report from Tools -> Diagnostics to give you any guidance.

 

Thanks Jon; I still can't bring up the GUI with my Pro key after the upgrade.  I've tried rebooting in safe mode, and renaming both the 'original' /config and the /config_copy made via SSH (using transmission.app) directly on the flash.

 

The diagnostics report is attached, as is the listing of the /boot/config directory on the flash.  The Prokey file is there with the original modification date.

 

I note that when I bring up the GUI, the terminal reports that

/boot/config/*.conf  (and *.dat and *.cfg)  are not found 

 

The GUI has a place to 'install a key file' if you have a key file URL.  I bought a preconfigured (5.05) flash and don't know of a URL.

 

Thanks

 

D

 

 

Feel free to transfer this to the support page.

tower-diagnostics-20150619-0851.zip

Flash_config_directory.txt

Link to comment

It would seem that the new DHCP timeout option that was added to unRAID late in the game is affecting my pfSense VM.  I wrote my own powerdown start script (S00.sh) to start my VMs in order from 1 to 4 (pfSense starts first).  It would seem that with the new DHCP logic, unRAID tries to connect to the network for 60 seconds and fails (because my pfSense VM has not started), then finishes booting and gives br0 a random IP address, and only then starts the VMs, but by that time it's too late.  I am assuming that even if I set the pfSense VM to autostart I am going to have the same issue?  Does anyone have any thoughts on how to get my pfSense VM to start before br0 tries to get an IP address?  I have pfSense assign unRAID a static IP address.

How big is the virtual disk for pfSense?

Link to comment
This topic is now closed to further replies.