Any news on 6.6?


Recommended Posts

 

26 minutes ago, ars92 said:

Jeez, you guys are really sensitive, haha. I didn't know having a two-liner update post from the devs would end up causing them to waste lots and lots of time which should have been used for development :D

 

Soon

 

It's a long-standing joke, and anyone who has been around these forums for a while should know that there are lots of threads like this and they are always pointless. Limetech is a very small company, perhaps a half dozen people, and they typically aren't even reading these threads. And nobody has paid for an update, so we have already gotten more than we paid for.

 

Link to comment
45 minutes ago, remotevisitor said:

As many as that .... I thought it was now about 3 actual employees with a couple of volunteer developers/testers helping out.

 

But then I could be way wrong, as I'm just going by the activity we see here on the forums.

They are a privately held company. VERY privately. As such, who is or is not a salaried employee is not necessarily reflected by forum presence. Any guesses by forum members, myself or otherwise, are likely just that, guesses. The only person actually required for limetech to exist as a company is Tom M., as far as we know. The rest of the staff is pretty much a mystery other than, as you said, the activity here on the forums.

Link to comment
On 8/18/2018 at 7:09 PM, jonathanm said:

They are a privately held company. VERY privately. As such, who is or is not a salaried employee is not necessarily reflected by forum presence. Any guesses by forum members, myself or otherwise, are likely just that, guesses. The only person actually required for limetech to exist as a company is Tom M., as far as we know. The rest of the staff is pretty much a mystery other than, as you said, the activity here on the forums.

 

This company is a "one man private show"??? WTF... I thought it was a real company with employees and much more... strange...

So it "can" happen that this project will die in the future?? Stranger than weird...

Edited by Zonediver
Link to comment
49 minutes ago, Zonediver said:

 

This company is a "one man private show"??? WTF... I thought it was a real company with employees and much more... strange...

So it "can" happen that this project will die in the future?? Stranger than weird...

It is a real company, with employees and much more. Companies in the USA are not required to publicly disclose much, if any, info if they don't want public investment in stocks or bonds.

 

Tom M. IS the company and, for many years, was pretty much solo. Several years ago he hired on a few people to help, and here we are.

 

The future of unraid is clear, in that Tom has said there are plans in place to ensure the continuance of the project in his absence. He is not required to disclose the details of said plans, but to my knowledge he has never lied to us. I firmly believe unraid is here for the long term, and will continue to slowly grow. It may not move as fast as some here want, but that's life.

  • Like 1
Link to comment

The current version (6.5.3) seems pretty good - and by that I mean in terms of its reliability, rather than its feature set. I can't find a lot wrong with it as it stands, though over time it will lack security updates, of course. Maybe there will be a 6.5.4 release to address that if 6.6 is a long way off.

Link to comment
On 8/15/2018 at 12:18 AM, Jcloud said:

Hey things haven't nearly reached the level of, Duke Nukem Forever, or Half Life 3.  ;)

 

:D LOL.

In reality, I'm waiting for the day when Unraid (or, more specifically, the Linux kernel) works fine with all Ryzen CPUs, with default BIOS settings.

The first thing I would do after an Unraid update would be to reset the BIOS to default settings, boot it, and then wait till it freezes on idle.

So far, it freezes every time.
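
For what it's worth, when testing for that idle freeze, a crude heartbeat logger makes it easier to tell roughly when the box locked up, since the last timestamp survives the hard reset. Just a sketch - the log path and interval are my own examples:

import os
import time
from datetime import datetime

# Example path on persistent storage (e.g. the unRAID flash drive) so the
# entries survive the reset; adjust to taste.
LOG_PATH = "/boot/heartbeat.log"

while True:
    with open(LOG_PATH, "a") as f:
        f.write(datetime.now().isoformat() + "\n")
        f.flush()
        os.fsync(f.fileno())  # force it onto the device before a possible freeze
    time.sleep(60)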

 

Link to comment
15 minutes ago, Shinobi said:

In reality, I'm waiting for the day when Unraid (or, more specifically, the Linux kernel) works fine with all Ryzen CPUs, with default BIOS settings.

 

You're always going to have to tweak some BIOS settings, regardless of your CPU architecture. But unRAID 6.5.3 runs fine on a Ryzen 7 2700X in an Asus Prime X470 Pro motherboard with BIOS 4018. I haven't made any C-state changes, either in the BIOS or with the zenstates application, but some of the BIOS defaults are just not suitable - I had to enable SVM, for example, and set IOMMU to Enabled rather than Auto. I also had to select 2933 MHz manually as the RAM speed because it otherwise falls back to the 2133 MHz JEDEC default.
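
If it helps, here's a small Python sketch (nothing unRAID-specific, and the dmesg match strings are assumptions on my part - exact wording varies by kernel) to double-check that SVM and the IOMMU actually took effect after changing those BIOS options:

import subprocess

def cpu_has_svm():
    # AMD-V shows up as the "svm" flag in /proc/cpuinfo.
    with open("/proc/cpuinfo") as f:
        return any("svm" in line.split() for line in f if line.startswith("flags"))

def iommu_in_dmesg():
    # Look for the AMD IOMMU driver announcing itself; may need root on some setups.
    out = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    return any(key in out for key in ("AMD-Vi", "IOMMU enabled"))

if __name__ == "__main__":
    print("SVM flag present:", cpu_has_svm())
    print("IOMMU reported by kernel:", iommu_in_dmesg())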

 

The situation may be different with 1000-series Ryzens and older BIOSes: I also have a very early (week 18 of 2017) Ryzen 7 1700 in a Gigabyte X370 motherboard, but it's in use for a different application than unRAID, though it still runs Linux. It has the C6 state disabled, I keep it busy, and it doesn't freeze.
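
As a quick way to see which idle states the kernel is actually offering and entering (so you can confirm a C6 change took), the standard cpuidle sysfs files can be read directly - a rough sketch, assuming the usual /sys/devices/system/cpu/.../cpuidle layout, which not every kernel/driver combination exposes:

import glob
import os

def read(state_dir, fname):
    try:
        with open(os.path.join(state_dir, fname)) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

# List each idle state for CPU 0 with how often it has been entered and the
# total time spent in it (microseconds).
for state_dir in sorted(glob.glob("/sys/devices/system/cpu/cpu0/cpuidle/state*")):
    print(f"{os.path.basename(state_dir)}: name={read(state_dir, 'name')} "
          f"usage={read(state_dir, 'usage')} time_us={read(state_dir, 'time')}")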

 

The 1000-series was a brilliant first iteration by AMD and the 2000-series is a great follow-up. Both work fine with the 4.14 kernel.

Link to comment

The Ryzen/Threadripper issues are a Linux kernel thing.

 

Not something UNRAID can solve.

 

6.6 will be on a newer kernel.

 

It's just a matter of debate whether it's 4.17 or 4.18.

 

If you have issues, submit a patch to help fix them in the kernel.

Edited by Dazog
Link to comment

I wonder if it's possible, but one thing that I would like to see is an option you can turn on so that if a disk is about to fail, the array goes offline instead and whatever was about to be written gets cached to the cache drive. Then you can check and see if the drive is really bad (I had an HBA issue a few days ago, and about 6 months ago a cable started to go bad). The drives themselves have always been good, but I have to rebuild anyways. What would be nice is if you could check the drive out and then start the array again. The array attempts to do whatever it was going to do FIRST before finishing starting, and then says whether it can or can't. If it can't, it goes back offline unless you tell it to force online, in which case it fails the drive. Or you can try to fix it again. Keep in mind this would simply be an OPTION which you could turn on if you want or leave off if you want. The other idea is basically an "unassigned drive" but read-only, so you don't have to worry about accidentally hosing a drive while making sure your data is still good on there, and being unable to start the array again without it thinking it's hosed and going through a rebuild.

 

Also, make it so the VMs can run SEPARATE from the array being started - but only if you choose to put the VM stuff onto the cache drive ONLY or onto a completely separate drive.

Edited by Jerky_san
Link to comment
18 minutes ago, Jerky_san said:

I wonder if it's possible, but one thing that I would like to see is an option you can turn on so that if a disk is about to fail, the array goes offline instead and whatever was about to be written gets cached to the cache drive. Then you can check and see if the drive is really bad (I had an HBA issue a few days ago, and about 6 months ago a cable started to go bad). The drives themselves have always been good, but I have to rebuild anyways. What would be nice is if you could check the drive out and then start the array again. The array attempts to do whatever it was going to do FIRST before finishing starting, and then says whether it can or can't. If it can't, it goes back offline unless you tell it to force online, in which case it fails the drive. Or you can try to fix it again. Keep in mind this would simply be an OPTION which you could turn on if you want or leave off if you want. The other idea is basically an "unassigned drive" but read-only, so you don't have to worry about accidentally hosing a drive while making sure your data is still good on there, and being unable to start the array again without it thinking it's hosed and going through a rebuild.

 

Also, make it so the VMs can run SEPARATE from the array being started - but only if you choose to put the VM stuff onto the cache drive ONLY or onto a completely separate drive.

 

That would be a very hairy thing to implement, since lots of Linux functionality would need to be overridden. It can't be handled by the shfs program; instead, all physical disk accesses would need to be captured - and that capture layer must distinguish the magical cache drives from all other drives.

 

The reason 8- and 16-disk RAID cards exist is so the card is the single point of failure for _all_ disks in an array. And battery-backup support in the controller card is intended to handle power loss or multi-disk cable failures, so the controller card can cache outstanding writes while the host sees a single disk it issues writes to. The Linux developers aren't too interested in trying to duplicate this functionality in software. ZFS is an example where the developers of a single file system introduce disk pools and try to implement this functionality for the disks in the pool - this is possible without modifying large amounts of Linux base functionality because all ZFS accesses end up going through the ZFS file system code.

 

So you should then consider dropping unRAID and running a machine with ZFS - or accept the unRAID advantages of single-disk accesses and easy extension of arrays, with the disadvantage that big multi-disk failures will mean bringing in help from the forum to suggest the best way to recover. And remember that if you lose one disk too many with ZFS, you can't mount the remaining data disks and access the surviving file data. With unRAID, any still-working data disks will retain good file data.

Link to comment
1 hour ago, pwm said:

 

That would be a very hairy thing to implement, since lots of Linux functionality would need to be overridden. It can't be handled by the shfs program; instead, all physical disk accesses would need to be captured - and that capture layer must distinguish the magical cache drives from all other drives.

 

The reason 8- and 16-disk RAID cards exist is so the card is the single point of failure for _all_ disks in an array. And battery-backup support in the controller card is intended to handle power loss or multi-disk cable failures, so the controller card can cache outstanding writes while the host sees a single disk it issues writes to. The Linux developers aren't too interested in trying to duplicate this functionality in software. ZFS is an example where the developers of a single file system introduce disk pools and try to implement this functionality for the disks in the pool - this is possible without modifying large amounts of Linux base functionality because all ZFS accesses end up going through the ZFS file system code.

 

So you should then consider dropping unRAID and running a machine with ZFS - or accept the unRAID advantages of single-disk accesses and easy extension of arrays, with the disadvantage that big multi-disk failures will mean bringing in help from the forum to suggest the best way to recover. And remember that if you lose one disk too many with ZFS, you can't mount the remaining data disks and access the surviving file data. With unRAID, any still-working data disks will retain good file data.

Sadly, ZFS is where I came from, because there was no way to expand besides adding pools, which to me seemed wasteful. I know they are adding a way to add disks (yet to actually see it). unRAID's flexibility in letting you add any old disk to the mix is what attracted me to it. At the time I didn't have enough money to drop on 16 8TB drives, so unRAID let me add a drive at a time.

Link to comment
6 minutes ago, Jerky_san said:

Sadly, ZFS is where I came from, because there was no way to expand besides adding pools, which to me seemed wasteful. I know they are adding a way to add disks (yet to actually see it). unRAID's flexibility in letting you add any old disk to the mix is what attracted me to it. At the time I didn't have enough money to drop on 16 8TB drives, so unRAID let me add a drive at a time.

 

That's why unRAID often ends up very high on the "best choice" lists. There are things other products can do that are hard or very hard to implement in unRAID, but the other products often have a couple of disadvantages that can be very significant. ZFS-based systems really are quite cool for larger enterprise setups where the cost of a handful of drives can be ignored and it's logical to add pools of disks, or even one or more new machines, to add storage capacity instead of adding single disks. But that isn't a good fit for most home users.

 

ZFS would need something similar to the Drobo plug-and-pray software (which would require a huge amount of underlying coding to handle auto-grow etc) to be a good general solution for home users.

 

1 minute ago, trurl said:

Keeping things simple like they are now is going to be much easier to troubleshoot, and makes it much less likely for a "smarter" system to do the wrong thing and make things worse.

 

Yes, smart solutions have a tendency to sound great but end up smelling. With an almost infinite number of ways things can fail, "smart" systems handle lots of situations but often fail catastrophically in the more uncommon corner cases. A simpler system with transparency is way easier to repair.

Link to comment

Thought I should chime in here. In short, 6.6 will be out VERY soon, but let's take another stab at quelling the masses about our release process and communication. Communicating every issue we run into as we work on a new release benefits no one. The user community gains nothing and neither do we. Even if we started posting status updates on where we are towards the next release, the complaints would just shift to us not posting often enough. So instead, we communicate with folks who can actually help us get past any roadblocks we face (Linux developers, hardware manufacturers, etc.), and I think that's what most users would want us to do anyhow.

 

Also, with regard to concerns about security, let's say we pushed out a release with Linux 4.18 and QEMU 3.0 in it for everyone, but it caused a lot of users to experience major issues. Would that be acceptable to anyone? Would it be better to push the release out and just say, "I guess you'll have to deal with it until a future kernel/QEMU update", or should we hold back on that release until we've resolved critical functionality/performance issues? The point is that just because a software or kernel update has been publicly released doesn't mean it's been fully vetted and tested with all use-case scenarios to ensure full stability. Furthermore, given that the overwhelming majority of our users are simply using Unraid in their home, where they are the sole user anyway, I don't think exploits such as Spectre and Meltdown are that big of an issue. And if you are hosting your Unraid server in a datacenter/multi-tenant setup where you are worried about those exploits, you should switch to another solution if our release frequency isn't fast enough for you, because we aren't going to push out security releases that break functionality or cripple system performance.

 

Bottom line:  patience is a virtue...  ;-) 

  • Like 5
  • Upvote 3
Link to comment
