unRAID Server Release 6.2.0-beta21 Available


Recommended Posts

parity check kicked off last night at 12am as expected

 

Main > Array Operation Shows:

 

Parity-Check in progress.

Cancel will stop the Parity-Check.

Total size: 4 TB

Elapsed time: 8 hours, 1 minute

Current position: 1.69 TB (42.3 %)

Estimated speed: 53.4 MB/sec

Estimated finish: 12 hours, 2 minutes

Sync errors corrected: 0

 

Which is correct

 

Dashboard Shows:

 

Parity is valid

Last checked on Sunday, 05/01/2016, 12:00 AM (today), finding 0 errors.

Duration: unavailable (no parity-check entries logged)

 

which is incorrect - parity is running.....
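For what it's worth, the Main-page figures above are internally consistent with each other. A quick sketch of how such estimates can be derived (a hypothetical helper, not actual webGui code; TB/MB are decimal units here):

```python
def parity_eta(total_tb, position_tb, speed_mb_s):
    """Return (percent complete, estimated seconds to finish) for a
    parity check, given the totals the status page reports."""
    percent = 100.0 * position_tb / total_tb
    remaining_mb = (total_tb - position_tb) * 1_000_000  # decimal TB -> MB
    return percent, remaining_mb / speed_mb_s

pct, eta = parity_eta(4.0, 1.69, 53.4)
# pct is about 42.25% and eta about 43,258 s (roughly 12 hours),
# matching the 42.3% / "12 hours, 2 minutes" shown above.
```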

 

Myk


I tried installing 6.2.0-beta21 via the Plugins page (pasting the URL as per the first post in this thread).

Unfortunately, when it booted into 6.2.0-beta21 it said my flash had no GUID ("Error - contact support") and that my license key was invalid. Couldn't start the array, of course. I removed all other USB devices and powered off and on again, but the problem remained.

 

I thought my flash might've failed, though it's pretty new (SanDisk Extreme 64GB). But I tried reverting back to 6.1.9 (which I had to extract and install by hand, because the plugin manager and even installplg wouldn't let me install an old version). It all came good without any fuss; the flash got its GUID back and the license is valid, just like nothing happened. Whew!

 

You didn't say what version you were coming from, but you now have to have the key file in the config folder, and I believe it has to be the only key file there. I am assuming that you do have the Pro version key.

 

 

 

Well, both of my servers do the parity check in under 8 hours (you can see the specs below). Hardware can influence the time, so it would be worthwhile to list which motherboard and cards you are using to supply the additional SATA ports. If you get 6.2.0-beta21 working, you could post those details here along with a diagnostics file. Otherwise, you should open a new thread in the appropriate sub-forum.

unRAID will try each key file in config until it finds the one that matches the GUID. I have both keys on both of my servers and it works fine that way.

 

Most likely, as mentioned, the server couldn't "phone home" due to some network issue. Another possibility is that the GUID was blacklisted after 6.1.9. There has been at least one case where someone sold their key to someone else and then told Limetech they needed a replacement. Replaced keys are blacklisted.

 

That is fine if the keys are for different GUIDs, but if a user has keys for both Basic and Pro for the same GUID then there can be issues. (I always assume Murphy's Laws apply!) The second point was to make sure that the key was in the config folder, as I seem to recall that with earlier versions of unRAID the key file was in the root of the flash drive. But the point that this beta does require Internet access is an excellent one. I hope the OP can find which item is giving him the boot issues!


I am currently running with dual parity, and yesterday afternoon I had a drive failure. I pulled the bad drive out because it was making a clicking noise. As good timing would have it, my monthly parity check kicked off a few hours later. It seems to be running OK with no errors so far and about 3 hours left. Should the parity check run if you have a failed disk?


parity check kicked off last night at 12am as expected

 

Main > Array Operation Shows:

 

Parity-Check in progress.

Cancel will stop the Parity-Check.

Total size: 4 TB

Elapsed time: 8 hours, 1 minute

Current position: 1.69 TB (42.3 %)

Estimated speed: 53.4 MB/sec

Estimated finish: 12 hours, 2 minutes

Sync errors corrected: 0

 

Which is correct

 

Dashboard Shows:

 

Parity is valid

Last checked on Sunday, 05/01/2016, 12:00 AM (today), finding 0 errors.

Duration: unavailable (no parity-check entries logged)

 

which is incorrect - parity is running.....

 

Myk

I have the same. Probably wouldn't have noticed if you hadn't mentioned it.

 

I reported this last month with the previous beta: http://lime-technology.com/forum/index.php?topic=47875.msg460729#msg460729

 


I am currently running with dual parity, and yesterday afternoon I had a drive failure. I pulled the bad drive out because it was making a clicking noise. As good timing would have it, my monthly parity check kicked off a few hours later. It seems to be running OK with no errors so far and about 3 hours left. Should the parity check run if you have a failed disk?

 

Well, it is labeled a 'Check'... If it is running a non-correcting parity check, it wouldn't make any difference and should actually find a failing (or failed) drive. (I admit that I don't know exactly what the results might be if a correcting check were being done, but since the default is correcting, I would assume that nothing bad would occur.) And it is always better to find a problem before you are actually using parity to rebuild a drive!

 

I would also assume that most of the time you wouldn't know a drive was in a failing state when the test is automatically started, so is your question really more along the lines of "Should I allow an automatic parity check to start if I know I have a failed drive?" My question would be "why would you?" It takes a long time for the check to run, and you could be rebuilding the bad drive during that time. If I had a failed drive and did not have a replacement on hand, I would (and have) shut the server down until the replacement drive arrived. I would not want to take a chance on a second drive failing while waiting for the replacement!



Well, it is labeled a 'Check'... If it is running a non-correcting parity check, it wouldn't make any difference and should actually find a failing (or failed) drive. (I admit that I don't know exactly what the results might be if a correcting check were being done, but since the default is correcting, I would assume that nothing bad would occur.) And it is always better to find a problem before you are actually using parity to rebuild a drive!

 

I would also assume that most of the time you wouldn't know a drive was in a failing state when the test is automatically started, so is your question really more along the lines of "Should I allow an automatic parity check to start if I know I have a failed drive?" My question would be "why would you?" It takes a long time for the check to run, and you could be rebuilding the bad drive during that time. If I had a failed drive and did not have a replacement on hand, I would (and have) shut the server down until the replacement drive arrived. I would not want to take a chance on a second drive failing while waiting for the replacement!

I thought in the past it wouldn't allow a parity check to run, since it would be in an unprotected state. I was wondering what the expected behavior should be. It still shows parity as valid since the dual parity is in place, so I am assuming that is why it was allowed to run. I figured I would let it complete, since in theory I still have dual parity and so should be able to withstand another drive failure without losing data. The failed drive was relatively new, so there wasn't a lot of data on it.



I thought in the past it wouldn't allow a parity check to run, since it would be in an unprotected state. I was wondering what the expected behavior should be. It still shows parity as valid since the dual parity is in place, so I am assuming that is why it was allowed to run. I figured I would let it complete, since in theory I still have dual parity and so should be able to withstand another drive failure without losing data. The failed drive was relatively new, so there wasn't a lot of data on it.

With dual parity and only a single missing or disabled drive, parity could be "checked" just by comparing the 2 parity drives, and nothing about the data from the data drives would really be able to tell it anything regarding whether parity was correct or not. And it wouldn't be possible to actually correct one or the other parity if they disagreed, since there would be no way to know which was wrong.

 

But I do wonder what it actually does and if this is expected behavior.



With dual parity and only a single missing or disabled drive, parity could be "checked" just by comparing the 2 parity drives, and nothing about the data from the data drives would really be able to tell it anything regarding whether parity was correct or not. And it wouldn't be possible to actually correct one or the other parity if they disagreed, since there would be no way to know which was wrong.

 

But I do wonder what it actually does and if this is expected behavior.

 

With dual parity and a disabled disk, unRAID will check and correct Q parity if it is found out of sync. Maybe since it's using P parity to emulate the missing disk, it can check Q parity?

 

I also don't know if this is intended, maybe Tom can shed some light.



With dual parity and a disabled disk, unRAID will check and correct Q parity if it is found out of sync. Maybe since it's using P parity to emulate the missing disk, it can check Q parity?

 

I also don't know if this is intended, maybe Tom can shed some light.

You could use either parity to emulate the missing disk and then check/correct the other parity, but if a correction was indicated it could be due to either parity being off, so you really don't know more than you would by simply comparing P to Q.

OK, so here is one that I wouldn't call a bug but rather a request.

 

My system has 32GB of memory; I have allocated 14GB to VM 1 and 8GB to VM 2, so 22GB for VMs in total, with the system showing 30GB used and 2GB cache.

 

Earlier today I had both my VMs doing something and decided to transfer a large number of files from a VM to my storage array. VM 1 then closed entirely, but VM 2 stayed on. Checking my logs, I can see the syslog reports it was out of memory and killed the qemu process it saw using the most memory, so that it could continue the file transfer.

It is great that it kept itself from running out of memory, but not so great that my VM was shut off (this VM runs my video recording software for CCTV).

 

It would be great if there were a way to set unRAID not to use memory for file transfers on these sorts of systems, or at least to prevent it doing so when only x amount is left. I doubt this is something new in this beta, but it's a heads-up for anyone else that may come across this in future, and also a sort of feature request for the next beta.
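For background: Linux absorbs large writes into the page cache and throttles writeback via the vm.dirty_ratio / vm.dirty_background_ratio sysctls; lowering them is a commonly suggested (though unofficial) mitigation for transfers eating RAM. A sketch of how those percentages translate into bytes of dirty data, assuming the common defaults of 20 and 10 (the kernel's real accounting is against available rather than total memory, so treat these as rough figures):

```python
def dirty_thresholds(total_ram_bytes, dirty_ratio=20, dirty_background_ratio=10):
    """Approximate bytes of dirty page cache at which background writeback
    starts, and at which writers begin to be throttled, per the vm.* ratios."""
    background = total_ram_bytes * dirty_background_ratio // 100
    hard = total_ram_bytes * dirty_ratio // 100
    return background, hard

bg, hard = dirty_thresholds(32 * 1024**3)  # a 32GB box like the one above
# roughly 3.2 GiB before writeback kicks in, 6.4 GiB before throttling
```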

 

Sorry if this is a little off-topic for everyone, but it's the best place I could think to put it, since this beta fixes so many QEMU issues :)



You could use either parity to emulate the missing disk and then check/correct the other parity, but if a correction was indicated it could be due to either parity being off, so you really don't know more than you would by simply comparing P to Q.

 

I don't think there is a computationally simple way of comparing P and Q directly (i.e. without involving the data disks).

 

Since P = d1 + d2 + ... + dn

 

and Q = d1² + d2² + ... + dn²

 

it seems to me that the relationship between P and Q is not a trivial one.
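The point can be made concrete. In the standard Linux md RAID-6 construction, P is a plain XOR of the data while Q is a Reed-Solomon sum over GF(2⁸) with a per-disk coefficient, so the two are not comparable without the data disks. A minimal per-byte sketch (an illustration of the scheme, not unRAID's actual code):

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with polynomial x^8+x^4+x^3+x^2+1 (0x11D)."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
    return r

def p_q(data_bytes):
    """P = XOR of all data bytes; Q = XOR of g^i * d_i with generator g = 2."""
    p, q, coeff = 0, 0, 1
    for d in data_bytes:
        p ^= d
        q ^= gf_mul(d, coeff)
        coeff = gf_mul(coeff, 2)  # advance g^i
    return p, q

disks = [0x12, 0x34, 0x56]
p, q = p_q(disks)                    # p = 0x70, q = 0x3F: unrelated byte patterns
missing = p ^ disks[0] ^ disks[2]    # rebuild disk 1 from P and the survivors
assert missing == disks[1]
```

Because Q folds a different coefficient into every disk, there is no cheap identity linking P and Q directly, which is the non-trivial relationship noted above.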

 



 

I don't think there is a computationally simple way of comparing P and Q directly (i.e. without involving the data disks).

 

Since P = d1 + d2 + ... + dn

 

and Q = d1² + d2² + ... + dn²

 

it seems to me that the relationship between P and Q is not a trivial one.

 

Exactly. I believe all the data disks have to be present to check/calculate Q parity, so for unRAID to be able to do it with a missing disk, it has to use P parity to calculate the data for the missing disk.

 

There's still a bug I reported in the first beta: with dual parity and two disks missing you still get the option to check (and correct) parity. All the button does is a read check, like when there are no parity disks, so the label should be changed to avoid confusion.

 

Edit: same thing happens with single parity and a missing disk; it should display a read check.


Still on the subject of parity, here's an interesting thing. Following an earlier discussion about the significance of errors in both P and Q parity, I set my monthly parity check to be a non-correcting one, and today I got my first ever unexpected error - just one, but affecting both P and Q.

 

May  1 05:00:01 Lapulapu kernel: mdcmd (436): check NOCORRECT
May  1 05:00:01 Lapulapu kernel: 
May  1 05:00:01 Lapulapu kernel: md: recovery thread: check P Q ...
May  1 05:00:01 Lapulapu kernel: md: using 1536k window, over a total of 4883770532 blocks.
May  1 07:00:02 Lapulapu root: /mnt/cache: 170.6 GiB (183155834880 bytes) trimmed
May  1 09:19:10 Lapulapu kernel: md: recovery thread: PQ incorrect, sector=4295074008
May  1 15:15:06 Lapulapu kernel: md: sync done. time=36904sec
May  1 15:15:06 Lapulapu kernel: md: recovery thread: completion status: 0

 

Completion status: 0? But it found an error and, because I told it not to, didn't correct it. At this point the Dashboard reported that "Parity is valid". Incorrect and worrying!

 

I had a think about it and decided that I had no way of knowing where the error lay. I checked the SMART reports and none of the disks was showing any errors, so after rereading Tom's comment I decided that the best thing to do was to run a correcting parity check. It is currently underway; it found the error at the same location and corrected it this time, as expected. I'll check the log again in the morning when it has finished.

 

May  1 22:17:09 Lapulapu kernel: mdcmd (445): check CORRECT
May  1 22:17:09 Lapulapu kernel: md: recovery thread: check P Q ...
May  1 22:17:09 Lapulapu kernel: md: using 1536k window, over a total of 4883770532 blocks.
May  2 02:36:46 Lapulapu kernel: md: recovery thread: PQ corrected, sector=4295074008
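As an aside, the sector of a sync error can be pulled out of the syslog mechanically. A sketch that scans for the "PQ incorrect/corrected" lines shown above (a hypothetical helper, not an unRAID feature):

```python
import re

# Matches lines like: "md: recovery thread: PQ incorrect, sector=4295074008"
SYNC_RE = re.compile(r"PQ (incorrect|corrected), sector=(\d+)")

def sync_errors(syslog_lines):
    """Return (status, sector) tuples for every PQ sync-error line."""
    hits = []
    for line in syslog_lines:
        m = SYNC_RE.search(line)
        if m:
            hits.append((m.group(1), int(m.group(2))))
    return hits

log = [
    "May  1 05:00:01 Lapulapu kernel: mdcmd (436): check NOCORRECT",
    "May  1 09:19:10 Lapulapu kernel: md: recovery thread: PQ incorrect, sector=4295074008",
    "May  2 02:36:46 Lapulapu kernel: md: recovery thread: PQ corrected, sector=4295074008",
]
assert sync_errors(log) == [("incorrect", 4295074008), ("corrected", 4295074008)]
```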

 


I'm having an issue with SMB shares yet again with beta 21, as I have in the past with 20 and 19. It seems that whenever I transfer large files to or from an SMB share, it will spike the CPU on my unRAID box to a constant 50% and lock up the file transfer. The only way to fix it is a reboot of the server. Has anyone else encountered this?


Completion status: 0? But it found an error and, because I told it not to, didn't correct it. At this point the Dashboard reported that "Parity is valid". Incorrect and worrying!

 

This is normal: completion status=0 just means that the check itself finished successfully. It wasn't your case, but a sync error can also be caused by a data disk read error, in which case parity could still be valid.

 

Since both parities are incorrect, the error was probably on a data disk, but if the disks all look OK you should correct it and hope it was a spurious event; if you continue to get unexpected sync errors you should definitely investigate.


With dual parity and a disabled disk, unRAID will check and correct Q parity if it is found out of sync. Maybe since it's using P parity to emulate the missing disk, it can check Q parity?

 

I also don't know if this is intended, maybe Tom can shed some light.

 

A little more on this: I think this is the intended behavior, since it's possible to add Q parity to an array with single parity and a missing disk. It looks like unRAID calculates it by using the emulated disk plus all the other data disks.
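That mechanism is cheap to sketch: with single parity, the missing disk is just the XOR of P with the surviving data disks, after which Q (or anything else) can be computed as if all disks were present. A toy per-byte illustration of the idea, not unRAID's code:

```python
def xor_all(values):
    """XOR-reduce a list of bytes."""
    r = 0
    for v in values:
        r ^= v
    return r

# Per byte, P parity is the XOR of all data disks.
data = [0x12, 0x34, 0x56, 0x78]
p = xor_all(data)

# Suppose disk 2 (value 0x56) fails: emulate it from P and the survivors.
survivors = data[:2] + data[3:]
emulated = p ^ xor_all(survivors)
assert emulated == 0x56

# The full data set (with the emulated byte) is now available to build Q.
full = data[:2] + [emulated] + data[3:]
assert full == data
```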


I'm a long-time user of unRAID (2007/2008) with two original 15-drive LimeTech machines. I did upgrade them last year to modern motherboards, controllers, PSUs, etc.

 

Both machines are used as NAS only, with just one single plugin (NERD for screen): no Docker, no VMs, nothing. Various Windows machines use these unRAID boxes as storage (SMB in a gigabit environment).

 

Last week I went from unRAID stable to the beta, and something that drove me nuts in the past has become even worse.

 

Some minutes ago I tried to create 130 folders and move (not copy) 130 files from a root folder on a disk to a sub-folder on the same disk. I used a Windows machine (batch job) and a disk share (disk1); the disk is ReiserFS and nearly full, but that should work. At some point during the move Windows stops creating and moving ("Network path is no longer available"), and at that point the UI no longer works. From the same Windows machine I can SSH to the unRAID box, so the network is still there, but SMB is plain dead. The syslog does not show anything. The disk and the UI are no longer available. After waiting 20-30 minutes, I can restart the process without further action on my side.

 

I have been able to reproduce this for years, on various unRAID releases and with both my old and my new hardware. I usually don't do two things in parallel with the unRAID boxes, because that fails most of the time.

 

I'm a stupid user, but I would guess that this is a ReiserFS thing. The XFS disks seem to be snappier. On ReiserFS I see thousands of reads on a disk before the first write happens. I guess that ReiserFS takes so long to read its structures that something (SMB?) times out. Just my 0.02 cents.

 

I attached a diagnostics file, but it does not show anything about the crash. Now the 20-30 minutes are over and I can restart the same creating and moving until the next crash.

 

I have to repeat that I could see this in the past as well, but with the beta it happens more and more.

 

Thanks.

tower2-diagnostics-20160502-1036.zip


That is fine if the keys are for different GUIDs, but if a user has keys for both Basic and Pro for the same GUID then there can be issues. (I always assume Murphy's Laws apply!) The second point was to make sure that the key was in the config folder, as I seem to recall that with earlier versions of unRAID the key file was in the root of the flash drive. But the point that this beta does require Internet access is an excellent one. I hope the OP can find which item is giving him the boot issues!

 

Thanks for the quick replies!

 

I was coming from 6.1.9. I just re-read the instructions and realised the method of upgrading via the plugin URL seems to be for people already running 6.2 betas - could my problem be that I was coming from 6.1.x?

 

My key file is in /boot/config/Pro.key and is the only *.key file in there, and yes it is Pro.

 

I think I'll deal with the speed issue in a separate thread, unless I get beta 21 going and it's still slow. It succeeds as it is, eventually, and I've been putting up with it for years across many hardware combinations, so I'd like it fixed but I'm kind of used to it. But I'll grab the diagnostics next time I run a parity check.

 

Thanks again for the input.

-- tallorder



This is normal: completion status=0 just means that the check itself finished successfully. It wasn't your case, but a sync error can also be caused by a data disk read error, in which case parity could still be valid.

 

Since both parities are incorrect, the error was probably on a data disk, but if the disks all look OK you should correct it and hope it was a spurious event; if you continue to get unexpected sync errors you should definitely investigate.

 

Thanks Johnnie. The second parity check completed successfully after fixing the one error. I'll stick with the monthly non-correcting checks and throw in the occasional manual one too until I've convinced myself that this was a spurious event. It will be interesting to see if the Dynamix file integrity plugin reveals anything the next time it runs its check. It would normally have run yesterday but was postponed by the parity check. If it does I'll be able to replace the offending file.

 


I realize that unRAID is not officially supported on VMware, but I was wondering if there are any users who have successfully upgraded their virtualized 6.1.9 setups to 6.2.0-beta21, and what your experience has been.

 

If you're referring to running unRAID as a VM guest, there is currently a passthrough bug: any hardware passed through to the unRAID VM will no longer be seen in 6.2. You might be okay if you RDM your drives, but my servers have their controllers passed through, so I can't confirm.


I'm having an issue with SMB shares yet again with beta 21, as I have in the past with 20 and 19. It seems that whenever I transfer large files to or from an SMB share, it will spike the CPU on my unRAID box to a constant 50% and lock up the file transfer. The only way to fix it is a reboot of the server. Has anyone else encountered this?

I might have the same issue; see my previous post in this thread. I need some more experimenting before I can file it as a bug.

 

Sent from my SM-N920V using Tapatalk

 

 


If you're referring to running unRAID as a VM guest, there is currently a passthrough bug: any hardware passed through to the unRAID VM will no longer be seen in 6.2. You might be okay if you RDM your drives, but my servers have their controllers passed through, so I can't confirm.

 

I know we're in the minority, running unsupported with unRAID inside ESXi, but for me this would be a deal breaker. Hopefully it's something that can be fixed before 6.2 final. I'd really hate to lose unRAID because I need to run it inside ESXi.

This topic is now closed to further replies.