unRAID Server Release 6.0-beta6-x86_64 Available



Anyone having parity check speed issues with beta6? I just upgraded from v5 to play with VMs and Docker, and my parity check speeds dropped from ~100MB/s to just 1MB/s while running just bare unRAID, no plugins or VMs/Docker containers. I'm thinking that can't be right, can it?

 

For how long has it been consistently at 1MB/s?  Have you tried accessing the webGUI from another device to see if it's a browser-related issue?

Hmm, that is odd.  Mind sharing your syslog?  If you share your flash device over your network, you should be able to type this via an SSH session to your server after logging in:

 

cp /var/log/syslog /boot/syslog

 

Then browse to your flash device over your network, copy the syslog file from it, and post it either on pastebin or within a <code> snippet on the forums here.

Been running for about 30 minutes now; just checked from another machine and it's reporting 1.9MB/s, about 2GB into the parity check.

 

Syslog is attached

syslog.txt


Reposting this as I think it got lost in all the VM chatter last night.

 

I've successfully changed the cache drive to btrfs.

 

I now cannot create a cache-only share. The page just refreshes.

 

Portion of the logfile from just after trying:

 

Jun 18 21:18:58 Tower emhttp: shcmd (358): mkdir '/mnt/user/Test Data'
Jun 18 21:18:58 Tower shfs/user: shfs_mkdir: assign_disk: Test Data (28) No space left on device
Jun 18 21:18:58 Tower emhttp: _shcmd: shcmd (358): exit status: 1
Jun 18 21:18:58 Tower emhttp: shcmd (359): rm '/boot/config/shares/Test Data.cfg'
Jun 18 21:18:58 Tower emhttp: shcmd (360): :>/etc/samba/smb-shares.conf
Jun 18 21:18:58 Tower avahi-daemon[495]: Files changed, reloading.
Jun 18 21:18:58 Tower emhttp: Restart SMB...
Jun 18 21:18:58 Tower emhttp: shcmd (361): killall -HUP smbd
Jun 18 21:18:58 Tower emhttp: shcmd (362): cp /etc/avahi/services/smb.service- /etc/avahi/services/smb.service
Jun 18 21:18:58 Tower avahi-daemon[495]: Files changed, reloading.
Jun 18 21:18:58 Tower avahi-daemon[495]: Service group file /etc/avahi/services/smb.service changed, reloading.
Jun 18 21:18:58 Tower emhttp: shcmd (363): ps axc | grep -q rpc.mountd
Jun 18 21:18:58 Tower emhttp: _shcmd: shcmd (363): exit status: 1
Jun 18 21:18:58 Tower emhttp: shcmd (364): /usr/local/sbin/emhttp_event svcs_restarted
Jun 18 21:18:58 Tower emhttp_event: svcs_restarted
Jun 18 21:18:59 Tower avahi-daemon[495]: Service "Tower" (/etc/avahi/services/smb.service) successfully established.

 

The cache drive has 117GB free. Other settings used: Min. free space 4194304, Fill-up, Use cache disk: Only.
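If it helps anyone debugging this: btrfs can sometimes return "No space left on device" even when df shows plenty free (its metadata chunks can fill up), so it may be worth comparing the two views of free space:

df -h /mnt/cache
btrfs filesystem df /mnt/cache
btrfs filesystem show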

 

Kevin

 

Edit

 

Changed back to RFS. Still can't create a cache-only share.

 

Edit

 

Dropped back to 6.0b5a and cache share creation is working again.

 

Kevin.

 

 


Anyone having parity check speed issues with beta6? I just upgraded from v5 to play with VMs and Docker, and my parity check speeds dropped from ~100MB/s to just 1MB/s while running just bare unRAID, no plugins or VMs/Docker containers. I'm thinking that can't be right, can it?

 

For how long has it been consistently at 1MB/s?  Have you tried accessing the webGUI from another device to see if it's a browser-related issue?

Hmm, that is odd.  Mind sharing your syslog?  If you share your flash device over your network, you should be able to type this via an SSH session to your server after logging in:

 

cp /var/log/syslog /boot/syslog

 

Then browse to your flash device over your network, copy the syslog file from it, and post it either on pastebin or within a <code> snippet on the forums here.

Been running for about 30 minutes now; just checked from another machine and it's reporting 1.9MB/s, about 2GB into the parity check.

 

Syslog is attached

 

Ok, first and foremost, can you reboot into safe mode to see if the parity check performance issue continues?  I want to see if this is plugin related since I see you're using a bunch.  Just a quick "sanity check" before we move on in troubleshooting...
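If you've never booted safe mode before: it's a separate entry in the boot menu on your flash drive. For reference, the stock syslinux.cfg entry looks roughly like this (yours may vary slightly):

label unRAID OS Safe Mode (no plugins)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode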


Not sure if this is related, but after shutting down my ArchVM via the command line and trying to restart it, I get the following error:

 

root@test:/mnt/cache/Apps/ArchVM# xl create arch.cfg
Parsing config from arch.cfg
failed to free memory for the domain
root@test:/mnt/cache/Apps/ArchVM#


Not sure if this is related, but after shutting down my ArchVM via the command line and trying to restart it, I get the following error:

 

root@test:/mnt/cache/Apps/ArchVM# xl create arch.cfg
Parsing config from arch.cfg
failed to free memory for the domain
root@test:/mnt/cache/Apps/ArchVM#

 

Please share your syslinux.cfg, your arch.cfg file, and your syslog.  We're checking this out today as well.
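In the meantime, one thing worth trying (this is a common cause of that particular xl error, though no guarantee it's yours): "failed to free memory for the domain" often points at dom0 memory autoballooning. Disabling it and giving dom0 a fixed memory size sometimes clears it:

# in /etc/xen/xl.conf
autoballoon="off"

# and on the Xen line in syslinux.cfg
dom0_mem=2048M,max:2048M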


All,

 

We've been looking into this issue (Xen Net bug / skb rides the rocket) again since it was first reported after the launch of Beta 6.  We tested the ArchVM specifically and where it was crashing consistently in a previous internal beta on an older Linux kernel, it was not crashing in Beta 6.

 

Here's my current suggestion:

 

1)  Make a backup copy of your Arch VM image somewhere.

2)  Reboot into non-Xen mode.

3)  Attempt to run the VM in KVM mode with virsh and an XML configuration file.

 

I'm going to try and put one together today for this to see if I can get a "once-Xen" VM running under KVM without any major effort.  I suggest this because this error is specific to Xen and does not show itself with KVM.
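To give an idea of what's involved, a bare-bones domain XML would look something like the sketch below; the disk path, bridge name, and sizing here are placeholders, not a tested config:

<domain type='kvm'>
  <name>arch</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/cache/Apps/ArchVM/arch.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
    <graphics type='vnc' port='-1'/>
  </devices>
</domain>

You'd then start it with virsh create arch.xml (or virsh define plus virsh start to make it persistent).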

 

I wasn't using ArchVM but my own VM, and I did this and it is so far working well. I am still working on moving to Docker, but now it is no longer an emergency. :)


So, this is weird.  I had just finished applying updates in my Windows VM (issues discussed in another thread), and then told Windows to shut down, which it did.

 

I then tried to go to the extension page in unRAID, but it was unavailable.  I then realized that I had lost internet connection as well.  I shouldn't need an internet connection to access unRAID, as they are both on the same subnet, and unRAID is wired directly into the router, which I could still access.  I could not access my modem from my laptop either, but saw the lights on the modem all appeared fine, meaning I should have internet access.  I changed a few ethernet cables, but still nothing.

 

Finally, I disconnected the ethernet cable coming from the unRAID box to my router, and voila, everything else works again.

 

So, somehow, shutting down the Windows VM caused unRAID to 'saturate' my router, and prevented it from connecting to anything else.  I cannot begin to explain how this could be, but simply disconnecting my server from my router resolved my connection issues.

 

I will hard boot unRAID now, since I can't access it anymore, but wanted to report this very weird issue.


So, this is weird.  I had just finished applying updates in my Windows VM (issues discussed in another thread), and then told Windows to shut down, which it did.

 

I then tried to go to the extension page in unRAID, but it was unavailable.  I then realized that I had lost internet connection as well.  I shouldn't need an internet connection to access unRAID, as they are both on the same subnet, and unRAID is wired directly into the router, which I could still access.  I could not access my modem from my laptop either, but saw the lights on the modem all appeared fine, meaning I should have internet access.  I changed a few ethernet cables, but still nothing.

 

Finally, I disconnected the ethernet cable coming from the unRAID box to my router, and voila, everything else works again.

 

So, somehow, shutting down the Windows VM caused unRAID to 'saturate' my router, and prevented it from connecting to anything else.  I cannot begin to explain how this could be, but simply disconnecting my server from my router resolved my connection issues.

 

I will hard boot unRAID now, since I can't access it anymore, but wanted to report this very weird issue.

 

Ok, that is bizarre.  I can't say for certain that's what actually was happening, but I can't rule it out either.  Keep us apprised of this as you continue testing.  I am going to guess this only happens when using VMs in Xen mode.  If you boot into Xen mode and don't use VMs, no issues?  What about non-Xen mode?


Reposting this as I think it got lost in all the VM chatter last night.

 

I've successfully changed the cache drive to btrfs.

 

I now cannot create a cache-only share. The page just refreshes.

I was able to do this without a problem after I had converted my cache drive to btrfs, so there has to be another factor at play.


 

Ok, first and foremost, can you reboot into safe mode to see if the parity check performance issue continues?  I want to see if this is plugin related since I see you're using a bunch.  Just a quick "sanity check" before we move on in troubleshooting...

 

Thought I uninstalled the old plugins from v5, guess not. I did have apcupsd and unmenu running, disabled unmenu and booted into safe mode, showing 1.1MB/s now, current position 533MB after 5 minutes, going to let it continue to run.

 

EDIT: Think I might've found my problem. Went through and read a large file from /mnt/disk* for each drive with dd; all drives came back with 100+ MB/s except one. That drive is dying, isn't it?
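For anyone wanting to repeat the test, it was roughly this (the file name is a placeholder; pick any large file actually present on each disk):

for d in /mnt/disk*; do
  echo "$d:"
  dd if="$d/some_large_file" of=/dev/null bs=1M count=1024 2>&1 | tail -n1
done

# then check the suspect drive's SMART data (replace sdX):
smartctl -a /dev/sdX | grep -iE 'reallocated|pending|uncorrect'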


I've read through all the threads on the latest BETA.  For those of us who are happily humming right along on 5a running ArchVM, Windows 7, etc., does it make sense to wait until the next BETA before upgrading from 5a? The current BETA seems like a bit of a change in direction; I assume Xen could be phased out at a later date?

 

Maybe best to wait until I hear more success stories converting existing ArchVMs to KVM etc., or even getting existing VMs working with Xen in the new release.

 

I understand the concept of BETA SW, just trying to be wise before taking the leap :)

 

Any feedback is appreciated; comments from successful migrations to Beta 6 are welcome.

 

 

 

 


Not sure if this is related, but after shutting down my ArchVM via the command line and trying to restart it, I get the following error:

 

root@test:/mnt/cache/Apps/ArchVM# xl create arch.cfg
Parsing config from arch.cfg
failed to free memory for the domain
root@test:/mnt/cache/Apps/ArchVM#

 

Attached!

 

Please share your syslinux.cfg, your arch.cfg file, and your syslog.  We're checking this out today as well.

Config_Logfiles.zip


Reposting this as I think it got lost in all the VM chatter last night.

 

I've successfully changed the cache drive to btrfs.

 

I now cannot create a cache-only share. The page just refreshes.

I was able to do this without a problem after I had converted my cache drive to btrfs, so there has to be another factor at play.

 

Quite possible. The only plugin was for APC. The logfile entries were the same for btrfs and RFS, but when I put b5a back with no other changes it worked.

 

I think I'll stay on 5a and keep a watch on this thread to see if anybody else has the problem.

 

Kevin


For what it's worth, I moved to beta6 and my Xen ArchVM has continued to work without issue. It's been ~13 hours now (and I rebooted a few times prior to that). I have not converted my cache drive to btrfs, but have done that to an external SSD for Docker.

 

I've seen a lot of others having issues, but wanted to report that some are having success too.

 

As a note, I do have 4 vcpus pinned to dom0 (up from 2 vcpus in beta5).
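For reference, the pinning is done with Xen's boot options on the append line in syslinux.cfg; mine is along these lines (the dom0_mem value is just an example):

append /xen dom0_mem=2048M dom0_max_vcpus=4 dom0_vcpus_pin --- /bzimage --- /bzroot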


So, this is weird.  I had just finished applying updates in my Windows VM (issues discussed in another thread), and then told Windows to shut down, which it did.

 

I then tried to go to the extension page in unRAID, but it was unavailable.  I then realized that I had lost internet connection as well.  I shouldn't need an internet connection to access unRAID, as they are both on the same subnet, and unRAID is wired directly into the router, which I could still access.  I could not access my modem from my laptop either, but saw the lights on the modem all appeared fine, meaning I should have internet access.  I changed a few ethernet cables, but still nothing.

 

Finally, I disconnected the ethernet cable coming from the unRAID box to my router, and voila, everything else works again.

 

So, somehow, shutting down the Windows VM caused unRAID to 'saturate' my router, and prevented it from connecting to anything else.  I cannot begin to explain how this could be, but simply disconnecting my server from my router resolved my connection issues.

 

I will hard boot unRAID now, since I can't access it anymore, but wanted to report this very weird issue.

 

Ok, that is bizarre.  I can't say for certain that's what actually was happening, but I can't rule it out either.  Keep us apprised of this as you continue testing.  I am going to guess this only happens when using VMs in Xen mode.  If you boot into Xen mode and don't use VMs, no issues?  What about non-Xen mode?

 

Yeah, very weird.  I had a loss of internet access last night also, but didn't track it back to this.  There was a VERY severe thunderstorm last night, so I chalked the loss of internet up to that.

 

Anyway, after disconnecting unRAID, I had great access to the internet and the modem.  As a test, I reconnected the server to the router, and the connectivity light on the router returned to blinking VERY fast.  I could still access the internet, but any site I tried loaded VERY slowly.  It's as if unRAID was just hammering the router with traffic, causing it to basically stop responding to other traffic requests.

 

I ended up hard-booting unRAID and all is well again.  I have not launched any VMs since rebooting, and probably will not any time soon.

 

I'm going to work on cleaning my cache drive, then converting to btrfs, then moving everything back, then work on getting Docker running.  Once I have my programs running fine with Docker, I'll come back to VM testing/updating/changing/etc.
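The move itself should just be a copy out and back, something like this (paths are examples; the actual reformat to btrfs happens in the webGUI in between):

rsync -a /mnt/cache/ /mnt/disk1/cache_backup/
# reformat the cache drive as btrfs via the webGUI, then:
rsync -a /mnt/disk1/cache_backup/ /mnt/cache/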

 

I had inquired about the future expectations of Xen and/or KVM, but with all the various activity going on the last 2 days, I think it got lost in the noise.  I just wanted to find out if it was better/recommended to focus on getting a KVM VM going for my Windows install, or if Xen is still the way to go (considering where we may be in 6-12 months from now).


I had inquired about the future expectations of Xen and/or KVM, but with all the various activity going on the last 2 days, I think it got lost in the noise.  I just wanted to find out if it was better/recommended to focus on getting a KVM VM going for my Windows install, or if Xen is still the way to go (considering where we may be in 6-12 months from now).

 

I don't think anyone can answer that yet. It's all going to come down to which better supports the needs of the users. KVM was introduced as there were Xen issues, but Xen is easier to manage (according to Grumpy). I would say that since we are less than 24 hours into beta6, it's going to be weeks (at least) until a clearer picture is available.

 


I had inquired about the future expectations of Xen and/or KVM, but with all the various activity going on the last 2 days, I think it got lost in the noise.  I just wanted to find out if it was better/recommended to focus on getting a KVM VM going for my Windows install, or if Xen is still the way to go (considering where we may be in 6-12 months from now).

 

It has not gotten lost in the noise ;-).  It's a question that deserves an answer but we're not ready to pound the gavel yet.  We need to see how this beta plays out a little longer before we can set any expectations for the future of Xen.  I mentioned this in other parts of the forum as well, but we will be posting a blog in the next few weeks to help set expectations for the future of this project altogether.  Stay tuned and for now, happy testing!!


 

Ok, first and foremost, can you reboot into safe mode to see if the parity check performance issue continues?  I want to see if this is plugin related since I see you're using a bunch.  Just a quick "sanity check" before we move on in troubleshooting...

 

Thought I uninstalled the old plugins from v5, guess not. I did have apcupsd and unmenu running, disabled unmenu and booted into safe mode, showing 1.1MB/s now, current position 533MB after 5 minutes, going to let it continue to run.

 

EDIT: Think I might've found my problem. Went through and read a large file from /mnt/disk* for each drive with dd; all drives came back with 100+ MB/s except one. That drive is dying, isn't it?

 

That fixed it; swapped drives around and I'm now rebuilding at ~110MB/s. Guess I had a drive decide to fail at the exact same time I decided to try beta6. Thanks for the help jonp


Not sure if this is related, but after shutting down my ArchVM via the command line and trying to restart it, I get the following error:

 

root@test:/mnt/cache/Apps/ArchVM# xl create arch.cfg
Parsing config from arch.cfg
failed to free memory for the domain
root@test:/mnt/cache/Apps/ArchVM#

 

Attached!

 

Please share your syslinux.cfg, your arch.cfg file, and your syslog.  We're checking this out today as well.

 

A further update: it seems to die within minutes of starting Transmission-Daemon...


 

Ok, first and foremost, can you reboot into safe mode to see if the parity check performance issue continues?  I want to see if this is plugin related since I see you're using a bunch.  Just a quick "sanity check" before we move on in troubleshooting...

 

Thought I uninstalled the old plugins from v5, guess not. I did have apcupsd and unmenu running, disabled unmenu and booted into safe mode, showing 1.1MB/s now, current position 533MB after 5 minutes, going to let it continue to run.

 

EDIT: Think I might've found my problem. Went through and read a large file from /mnt/disk* for each drive with dd; all drives came back with 100+ MB/s except one. That drive is dying, isn't it?

 

That fixed it; swapped drives around and I'm now rebuilding at ~110MB/s. Guess I had a drive decide to fail at the exact same time I decided to try beta6. Thanks for the help jonp

 

No problem.  While I'm sad to hear you have a defective drive, I'm happy to hear that it wasn't anything to do with beta 6!  Thanks for the follow up!!


Thank you for the massive upgrade. One question (if this has been addressed before, please let me know):

 

With the current release it feels as if Docker is being pushed on us, which is fine; Docker is great. However, the question I have going forward is: will we continue to have Xen support? I feel Docker and Xen are in essence parallel solutions.

 

i.e., I prefer Xen and do not want to use Docker; is the roadmap to phase in Docker and phase out Xen?

 

 


Thank you for the massive upgrade. One question (if this has been addressed before, please let me know):

 

With the current release it feels as if Docker is being pushed on us, which is fine; Docker is great. However, the question I have going forward is: will we continue to have Xen support? I feel Docker and Xen are in essence parallel solutions.

 

i.e., I prefer Xen and do not want to use Docker; is the roadmap to phase in Docker and phase out Xen?

 

jonp has commented elsewhere that they haven't made any determinations on Xen/KVM. It is pretty safe to assume one of the two technologies will make it into 6.0 final, as the whole reason they tried KVM was that they were having pass-through problems with Xen and GPU/USB. Docker doesn't have any relevance to this discussion if you are looking at building a Windows VM (which does appear to be a goal).

 

So, while it could be Xen, or it could be KVM, either way you should still be able to use VMs on unRAID 6.0. As to which will stay? It will likely be a few weeks of testing before anything concrete is known.


Yes, that answered my question, thank you. It really boils down to the PIA VPN app with the kill switch, so I can run uTorrent without worrying about complex IP rules or the VPN dropping off. Just easier to toss it in a Windows VM (and keep the encrypted traffic separate) and go.

 

AKA: 6.0 and future releases will have Docker, plus either Xen or KVM.

Thanks for the reply.


Anyone having parity check speed issues with beta6? I just upgraded from v5 to play with VMs and Docker, and my parity check speeds dropped from ~100MB/s to just 1MB/s while running just bare unRAID, no plugins or VMs/Docker containers. I'm thinking that can't be right, can it?

I have a script that monitors the parity check speeds and graphs the output, I'll give it a run tonight.
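Nothing fancy; the core of it just samples the md driver's resync position once a minute and works out a rate. Something like this, assuming mdcmd reports mdResyncPos in 1K blocks (adjust if yours differs):

prev=0
while sleep 60; do
  pos=$(/root/mdcmd status | awk -F= '/^mdResyncPos/ {print $2}')
  # first line is meaningless; the rate settles from the second sample on
  echo "$(date +%T)  $(( (pos - prev) / 60 / 1024 )) MB/s"
  prev=$pos
done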


Anyone having parity check speed issues with beta6? I just upgraded from v5 to play with VMs and Docker, and my parity check speeds dropped from ~100MB/s to just 1MB/s while running just bare unRAID, no plugins or VMs/Docker containers. I'm thinking that can't be right, can it?

I have a script that monitors the parity check speeds and graphs the output, I'll give it a run tonight.

 

Just so you know, this user (SuBNoiZe) reported back that this was due to a bad/dying drive.  He replaced it and his issues went away.  Feel free to monitor / check the speeds tonight and let us know what you find!
