unRAID Server Release 6.2.0-beta21 Available



Apologies if this has been mentioned before, but is there any word on a definitive non-beta 6.2 release? I'm looking to virtualize my gaming rig, but I'd like to avoid the upgrade bump and only do it after a final 6.2 release.

They don't do that sort of thing around here. ;D

Apologies if this has been mentioned before, but is there any word on a definitive non-beta 6.2 release? I'm looking to virtualize my gaming rig, but I'd like to avoid the upgrade bump and only do it after a final 6.2 release.

Some time in the coming YEARS it will be out! ;D

Apologies if this has been mentioned before, but is there any word on a definitive non-beta 6.2 release? I'm looking to virtualize my gaming rig, but I'd like to avoid the upgrade bump and only do it after a final 6.2 release.

 

I've got a second parity drive just sitting there waiting for the upgrade to 6.2. I also want to build the virtualized gaming PC like you, but haven't bought a GPU for it yet. I've read enough posts on this forum to know that, as others have jokingly indicated, there are no hard release dates for unRAID: it'll be ready when it's ready. A lot of issues have been fixed via the public betas so far, and Limetech has been releasing updated versions fairly regularly, but based on this thread there are still some major issues to be addressed before a "stable" release is offered. I really, really want dual parity enabled, as I've lost too many drives in the past to be comfortable without it, but I'm still somewhat patiently waiting until all these guinea pigs help work out the bugs in the betas before I upgrade. ;D


I've got a second parity drive just sitting there waiting for the upgrade to 6.2. I also want to build the virtualized gaming PC like you, but haven't bought a GPU for it yet. I've read enough posts on this forum to know that, as others have jokingly indicated, there are no hard release dates for unRAID: it'll be ready when it's ready. A lot of issues have been fixed via the public betas so far, and Limetech has been releasing updated versions fairly regularly, but based on this thread there are still some major issues to be addressed before a "stable" release is offered. I really, really want dual parity enabled, as I've lost too many drives in the past to be comfortable without it, but I'm still somewhat patiently waiting until all these guinea pigs help work out the bugs in the betas before I upgrade. ;D

 

IMO the NAS part of v6.2 is working solid, including dual parity; all my servers are on v6.2. Most outstanding issues seem to be virtualization related, and since you're not using it I would upgrade. It's a beta, so there are some risks, but probably fewer than running a big array with single parity, and it's easy to downgrade to v6.1 if you encounter any issues.


IMO the NAS part of v6.2 is working solid, including dual parity; all my servers are on v6.2. Most outstanding issues seem to be virtualization related, and since you're not using it I would upgrade. It's a beta, so there are some risks, but probably fewer than running a big array with single parity, and it's easy to downgrade to v6.1 if you encounter any issues.

It's hard to tell whether virtualisation is the cause or just a symptom of the issue.

I think there are some who don't use VMs, or at least don't have them on the array. It may just be that the VM workload is an easy way to trigger issues that came with introducing SMB3 or dual parity. And in my case, it's "just" the array that stops working... anything that has nothing to do with the array works fine.

 

So judging from my beta experience, everything is working, but the NAS part may have some unidentified issues under certain workloads...

 

 

Soon™

For the next beta version, Soon™ is okay. 8)

For the final release, I prefer When It's Done™... it indicates that the release date is based on the state of the product ;)

 

Now ?-------------- Very Soon -------- Soon™ -------- Soon-ish(er) ------- Soon-ish --------? End of Time | when it's done™


I think I can reliably reproduce a hang similar to dAigo's.

 

First, I ran "top" on the console, to get some minimal monitoring since I know the GUI will hang.

Then I created a batch file in my Win 10 VM to move a 6GB file back and forth between 2 shares.

 

The job will fail before finishing the 2nd copy task.

It fails with "An unexpected network error occurred" (see screenshot).

 

What I observed:

  • As long as I don't access ANY network-related thing after the error, I can actually shut down the VM ("top" in the console then no longer shows qemu-syst+, which I presume means the VM has terminated).
  • If I do anything with the network relating to the tower, e.g. open a text file on a share, the VM hangs and does not recover. Shares tested include array-only, cache+array, and cache-only locations.
  • I can still browse the Internet in the VM and access other parts of my network, e.g. other PCs' shares, from inside the VM.
  • I can still ping the tower fine from outside the VM (ping screenshot attached).
  • There is no drive activity at all during the hang.
  • The GUI dashboard hangs showing CPU at 100%. However, top from the console says only qemu-syst+ is using CPU, at only about 12%, with nothing else that could add up to 100%.
  • As long as I don't exit top, it still works and updates itself. If I exit top and type any command, e.g. reboot / powerdown, the console also hangs and does not recover.
  • After a forced hard reset (actually several), everything mounts back into the array fine.

 

All of the above suggests the hang itself comes from the array.

 

If it were the network / VM / memory etc., then I wouldn't have been able to browse the Internet, monitor "top", access other parts of the network from the VM, etc.
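That elimination argument can be made explicit with a toy decision helper. This is purely illustrative: the category labels are my own, not anything unRAID reports, and it just encodes the reasoning in the post above.

```python
def likely_hung_subsystem(ping_ok: bool, vm_internet_ok: bool,
                          console_ok: bool, share_io_ok: bool) -> str:
    """Toy helper mirroring the elimination argument above.

    Each branch rules out one subsystem; if everything else still
    responds but share I/O is dead, suspicion falls on the array.
    """
    if share_io_ok:
        return "nothing hung"
    if not ping_ok:
        return "network stack"        # tower unreachable entirely
    if not vm_internet_ok:
        return "VM networking"        # VM cut off from everything
    if not console_ok:
        return "whole system"         # even the local console is dead
    return "array / storage layer"    # only share I/O fails

# The observations in the post: ping works, VM Internet works,
# top on the console keeps updating, but share I/O is dead.
verdict = likely_hung_subsystem(ping_ok=True, vm_internet_ok=True,
                                console_ok=True, share_io_ok=False)
```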

 

My configuration:

  • 8GB RAM, 4GB for VM
  • Cache SSD in btrfs
  • Single parity
  • Array in ZFS
  • VNC graphic
  • No other VM / docker running during the test

(Attached screenshots: Dashboard, Top, VNC, and Ping.)


Calling attention to a networking fix that may or may not help some here. It's found here. The basic class of affected users: on boot the system works great, then at some point there's a huge drop in networking performance, which possibly causes other issues too. I don't see a connection with those who have hard freezes. It primarily involves Intel NICs (possibly a few Broadcoms too), where one of their strengths turns into a weakness: their ability to offload some of the networking load from the CPU.

 

Here is a good explanation of the issue. He doesn't mention virtualization at all, but a Google search finds many with this issue ARE associating it with virtualization (and not just KVM either). He also ties it to MTU confusion only, but I suspect there are additional sources of trouble. He only mentions disabling Large Send Offload, but a few others online also mention disabling TCP Offload and TCP Checksum Offload. I don't know, but it may be worth experimenting for someone with the issue.
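For anyone experimenting on the Linux side, the usual knobs are the standard `ethtool -K` offload features. A small sketch that only builds the command rather than running it (the feature keys are the common ethtool names; check `ethtool -k <iface>` to see what your NIC actually exposes, and on a Windows guest the equivalents live under the NIC's Advanced driver properties):

```python
# Offload features commonly disabled when chasing this class of problem.
OFFLOAD_FEATURES = {
    "tso": "off",   # TCP Segmentation Offload (a.k.a. Large Send Offload)
    "gso": "off",   # Generic Segmentation Offload
    "gro": "off",   # Generic Receive Offload
    "tx":  "off",   # TX checksum offload
    "rx":  "off",   # RX checksum offload
}

def offload_disable_cmd(iface: str) -> list:
    """Build (but do not run) an `ethtool -K` command disabling offloads."""
    cmd = ["ethtool", "-K", iface]
    for feature, state in OFFLOAD_FEATURES.items():
        cmd += [feature, state]
    return cmd

# To actually apply it (as root), something like:
#   subprocess.run(offload_disable_cmd("eth0"), check=True)
```

Disabling offloads costs some CPU, so it's a diagnostic step, not necessarily a permanent fix.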


Some beginner questions: I just got started with unRAID and managed to set up some VMs with all the info on this forum, but I've run into one issue with an encrypted Windows VM:

At boot, before Windows, I need to type in the encryption password, but the keyboard/mouse don't work yet. Any way around this? (VNC works, but I want to run without it.) I noticed my motherboard has 3 USB controllers in Devices, so maybe I can pass one of them through?

Windows 7 64-bit, SeaBIOS; tried both Q35 and i440fx. Also tried 2 different keyboards, just in case.

 

As a side note: at first I had problems even getting the screen to show anything before Windows, but manually supplying the romfile as mentioned earlier worked great for that problem! (HD 6950 card)

<rom bar='on' file='/mnt/user/share/romfile.rom'/>

 

Also some feedback on beta21: I noticed the SSD 4K write speeds more than doubled, without any tweaking, compared to the earlier version! From 16MB/s 4K and 60MB/s 4K Q32 to 32MB/s and 200MB/s. (I'm not sure how accurate these CrystalDiskMark numbers are, or how much caching is going on, but the 4K writes were really low in earlier versions.)

Still looking for any tweaks to take it closer to native speeds on this version; is the dataplane tweak still relevant?

 


Some beginner questions: I just got started with unRAID and managed to set up some VMs with all the info on this forum, but I've run into one issue with an encrypted Windows VM:

 

You really oughta NOT be trying beta s/w when you have not even tried unRAID stable yet... Your chances of telling whether any given problem is (a) beta related, (b) warmware (i.e. you), or (c) inexperience with unRAID are about zero...


Some beginner questions: I just got started with unRAID and managed to set up some VMs with all the info on this forum, but I've run into one issue with an encrypted Windows VM:

 

You really oughta NOT be trying beta s/w when you have not even tried unRAID stable yet... Your chances of telling whether any given problem is (a) beta related, (b) warmware (i.e. you), or (c) inexperience with unRAID are about zero...

I have run stable; how else could I have done my comparisons? I thought my keyboard question was fairly specific, although it might apply to earlier versions of unRAID, so maybe I should have asked in another thread...


I tried installing 6.2.0-beta21 via the Plugins page (pasted the URL as per first post in this thread).

 

This was on my production array, even though this is a beta version, because I've had very slow parity rebuilds for ages - like 3-4 days slow, with 13 data drives - and I have Marvell controllers, which might be like the ones mentioned in the release notes. Figured it'd be worth the risk if I could possibly get some speedups there.

 

Unfortunately when it booted in 6.2.0-beta21, it said my flash had no GUID ("Error - contact support"), and that my license key was invalid. Couldn't start the array of course. I removed all other USB devices, powered off and on again, same problem though.

 

I thought my flash might've failed, though it's pretty new (SanDisk Extreme 64GB). But I tried reverting back to 6.1.9 (which I had to extract and install by hand, because the plugin manager and even installplg wouldn't let me install an old version). It all came good without any fuss; the flash got its GUID back and the license is valid, just like nothing happened. Whew!

 

I didn't keep logs; for now I'm assuming that it's me doing something wrong, and since it didn't go smoothly my priority was getting things back to where they were. But now that I know I can roll back to the current stable version easily, I'd be happy to run experiments if this seems like it might be more than my own error.

 

Any suggestions?

 

Incidentally, is 3-4 days actually excessively slow for a parity rebuild across an array with 13 drives, parity 6TB and total size 31TB per Main tab? Or am I just being impatient? ;)

 

Thanks

-- tallorder


Does your server have Internet access? This is a requirement for running the beta.

 

Regarding the parity sync: it is primarily determined by the size of the parity disk, not by the amount of data it is protecting. Having said that, I would only expect 6TB to take about a day, so perhaps you should post your diagnostics (Tools->Diagnostics) to see if anyone can spot an issue.
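Back-of-the-envelope, that rule of thumb looks like this (the throughput figures are illustrative averages, not measurements from the poster's server):

```python
def parity_sync_hours(parity_tb: float, avg_mb_per_s: float) -> float:
    """Rough parity-sync duration: parity size / average throughput.

    Only the parity disk size matters, because every byte of parity
    must be written regardless of how much data it protects.
    """
    seconds = (parity_tb * 1e12) / (avg_mb_per_s * 1e6)
    return seconds / 3600

healthy = parity_sync_hours(6, 70)   # ~70 MB/s average -> roughly a day
slow = parity_sync_hours(6, 20)      # ~20 MB/s average -> multiple days
```

So a 6TB parity disk averaging around 70 MB/s finishes in about 24 hours, while a 3-4 day sync implies the array is averaging only ~20 MB/s, which is why the slow Marvell controllers are a plausible suspect.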


I tried installing 6.2.0-beta21 via the Plugins page (pasted the URL as per first post in this thread).

Unfortunately when it booted in 6.2.0-beta21, it said my flash had no GUID ("Error - contact support"), and that my license key was invalid. Couldn't start the array of course. I removed all other USB devices, powered off and on again, same problem though.

 

I thought my flash might've failed, though it's pretty new (SanDisk Extreme 64GB). But I tried reverting back to 6.1.9 (which I had to extract and install by hand, because the plugin manager and even installplg wouldn't let me install an old version). It all came good without any fuss; the flash got its GUID back and the license is valid, just like nothing happened. Whew!

 

You didn't say what version you were coming from, but now you have to have the key file in the config folder, and I believe it has to be the only key file there. I am assuming that you do have the Pro version key.

 

I didn't keep logs; for now I'm assuming that it's me doing something wrong, and since it didn't go smoothly my priority was getting things back to where they were. But now that I know I can roll back to the current stable version easily, I'd be happy to run experiments if this seems like it might be more than my own error.

 

Any suggestions?

 

Incidentally, is 3-4 days actually excessively slow for a parity rebuild across an array with 13 drives, parity 6TB and total size 31TB per Main tab? Or am I just being impatient? ;)

 

Thanks

-- tallorder

 

Well, both of my servers do the parity check in under 8 hours (you can see the specs below). Hardware can influence the time, so it might be well to list what MB and cards you are using to supply the additional SATA ports. If you get ver6b21 working, you could post those details here along with a diagnostics file. Otherwise, you should open a new thread in the appropriate sub-forum.


I tried installing 6.2.0-beta21 via the Plugins page (pasted the URL as per first post in this thread).

Unfortunately when it booted in 6.2.0-beta21, it said my flash had no GUID ("Error - contact support"), and that my license key was invalid. Couldn't start the array of course. I removed all other USB devices, powered off and on again, same problem though.

 

I thought my flash might've failed, though it's pretty new (SanDisk Extreme 64GB). But I tried reverting back to 6.1.9 (which I had to extract and install by hand, because the plugin manager and even installplg wouldn't let me install an old version). It all came good without any fuss; the flash got its GUID back and the license is valid, just like nothing happened. Whew!

 

You didn't say what version you were coming from, but now you have to have the key file in the config folder, and I believe it has to be the only key file there. I am assuming that you do have the Pro version key.

 

I didn't keep logs; for now I'm assuming that it's me doing something wrong, and since it didn't go smoothly my priority was getting things back to where they were. But now that I know I can roll back to the current stable version easily, I'd be happy to run experiments if this seems like it might be more than my own error.

 

Any suggestions?

 

Incidentally, is 3-4 days actually excessively slow for a parity rebuild across an array with 13 drives, parity 6TB and total size 31TB per Main tab? Or am I just being impatient? ;)

 

Thanks

-- tallorder

 

Well, both of my servers do the parity check in under 8 hours (you can see the specs below). Hardware can influence the time, so it might be well to list what MB and cards you are using to supply the additional SATA ports. If you get ver6b21 working, you could post those details here along with a diagnostics file. Otherwise, you should open a new thread in the appropriate sub-forum.

unRAID will try each key file in config until it finds the one that matches the GUID. I have both keys on both of my servers and it works fine that way.

 

Most likely, as mentioned, the server couldn't "phone home" due to some network issue. Another possibility is that the GUID was blacklisted after 6.1.9. There has been at least one case where someone sold their key to someone else, then told Limetech they needed a replacement. Replaced keys are blacklisted.


The parity check kicked off last night at 12am, as expected.

 

Main > Array Operation Shows:

 

Parity-Check in progress.

Cancel will stop the Parity-Check.

Total size: 4 TB

Elapsed time: 8 hours, 1 minute

Current position: 1.69 TB (42.3 %)

Estimated speed: 53.4 MB/sec

Estimated finish: 12 hours, 2 minutes

Sync errors corrected: 0

 

Which is correct
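As a cross-check, those Array Operation figures are internally consistent: the ETA is just the remaining bytes divided by the estimated speed.

```python
# Figures from the Main > Array Operation display above.
total_tb = 4.0
position_tb = 1.69
speed_mb_s = 53.4

percent_done = 100 * position_tb / total_tb  # -> 42.25, shown rounded as 42.3 %
remaining_s = (total_tb - position_tb) * 1e12 / (speed_mb_s * 1e6)
eta_h, eta_m = divmod(int(remaining_s // 60), 60)
# -> about 12 hours, within a couple of minutes of the displayed estimate
```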

 

Dashboard Shows:

 

Parity is valid

Last checked on Sunday, 05/01/2016, 12:00 AM (today), finding 0 errors.

Duration: unavailable (no parity-check entries logged)

 

which is incorrect - the parity check is still running...

 

Myk

 

This topic is now closed to further replies.