unRAID Server Release 6.2.0-beta21 Available


limetech

Recommended Posts

What is the disadvantage of Turbo Write?  It made a huge difference in my test array with four 8TB Seagate drives: the difference between a sustained 16MB/s and 40MB/s moving a terabyte of files.

All array drives must be spun up for any write. In normal operation, only the parity drive(s) and the target drive must be spun up for a write.

 

This may not really be a disadvantage, particularly when it allows you to complete the write much quicker, allowing all drives to spin back down again much sooner than otherwise.
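For anyone who wants to experiment, Turbo Write corresponds to unRAID's md_write_method tunable (Settings -> Disk Settings in the web UI). As a hedged sketch only, assuming the mdcmd console tool and the numeric values recent 6.x releases appear to use (check your own release before relying on them):

    # reconstruct write ("Turbo Write"): reads every other data drive,
    # so the whole array spins up, but sustained writes are much faster
    mdcmd set md_write_method 1

    # read/modify/write (the default): only parity and the target drive
    # need to be spinning, at the cost of slower writes
    mdcmd set md_write_method 0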

Link to comment

What is the disadvantage of Turbo Write?  It made a huge difference in my test array with four 8TB Seagate drives: the difference between a sustained 16MB/s and 40MB/s moving a terabyte of files.

 

See here for a discussion of Turbo Write:

 

    http://lime-technology.com/forum/index.php?topic=47839.0

 

By the way, with only four drives in your array, you have the perfect type of array for its use: the only real penalty is slightly increased power usage while all the drives are spun up for writes.

Link to comment

I'm in no way an expert here, but what's curious is that the internal bridge is set up, then disabled.  That would leave anything using it hanging.  That may not be important though, as nothing has had time to begin using it.

 

Some possible steps to alter what's happening: try removing dnsmasq from the equation (I don't know if you can), try disabling avahi just to see if anything changes, and perhaps try it without the internal bridging.

 

Thanks for the feedback.  I've been waiting for my parity to rebuild since last night before trying any of your suggested changes.  However, while the parity was rebuilding I did slowly start enabling dockers and monitored their effect on the load of the system.  I went in thinking Plex would be a huge load on the system, but in the end I was able to load all dockers except Sabnzbd.  Once the parity rebuilt, I enabled Sabnzbd and the load went through the roof and the web UI stopped responding.  Sniffing around while that happened, I noticed that for some reason I was unable to read (ls) the Incomplete folder (on disk 7, not a user share) that Sabnzbd was also trying to read from when it hung.  After a reboot I started a btrfs scrub on the drive in question to make sure it wasn't an FS error that was causing Sabnzbd to lock up, which in turn took out the web UI.

 

So while the jury is still out until the scrub is finished (I'm 1TB into a 3TB drive), it seems that if I don't run the Sabnzbd docker then all is well.  My server was stable enough to rebuild dual 4TB parity drives in about 15 hours without Sabnzbd running.
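For anyone following along at home, checking on a scrub is straightforward from the console. A minimal sketch, assuming the disk in question is mounted at /mnt/disk7 (the path here is illustrative):

    # kick off a scrub of the filesystem; it runs in the background
    btrfs scrub start /mnt/disk7

    # report progress and any checksum errors found so far
    btrfs scrub status /mnt/disk7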

 

Interestingly, as a test I created an Ubuntu JEOS 14.04 VM and installed Sabnzbd there.  It has been happily running stable for the last hour or so, even with the host server load staying above 15.  So it's almost as though Docker is choking when Sabnzbd has a lot to process, but KVM keeps chugging along.  I would have thought that with less overhead, Docker would perform better.

 

So, again thanks for the suggestions, but I'm reluctant to try them at the moment as things are somewhat stable, with everything at least working.  I still need to figure my way through the various combinations of technologies, but in the end it is amazing what this one box is doing.  It's:

 

- a 73 TB media server

- a 9 TB ZFS RAID5 server with Samba and 3 years and counting of Windows Previous Versions for all my documents, family photos, etc.

- a downloading powerhouse (Sabnzbd, Sonarr, FlexGet, etc)

- a CrashPlan offsite backup enabler for my 9TB of critical data above, plus selected media (thanks to the 9p passthrough)

 

I've consolidated multiple machines into this thing, and thanks to KVM, passthrough, 9p, virtio and all the other Linux and Unraid goodness it's running on less electrical power and less equipment, with more than adequate performance for home use.

 

I know I sound like an infomercial/fanboy, but it really is awesome when you step back and think about it (and when everything works! lol).

 

So, good job LT/volunteers/open source communities!  ;)

Link to comment

Switch to NZBGet and never look back :)

Link to comment

I believe the current recommendation is to use XFS for array devices, no?

That's my understanding of the common opinion on the forum; I can't say for sure that's LT's official stance though...

 

Sent from my LG-H815 using Tapatalk

 

 

Link to comment

Switch to NZBGet and never look back :)

 

I had switched to NZBGet for a few months up until about a week ago.  I finally gave up on it: no matter how much I tweaked the settings I couldn't get it to saturate my download speed.  It kept pulsing on and off and left a lot of unused bandwidth on the table.  After my previous years of Sabnzbd I figured I'd go back, and it worked perfectly right out of the box.

 

I believe the current recommendation is to use XFS for array devices, no?

 

Yes, I believe the LT stance is that BTRFS is intended mainly for the cache, and using it on array drives is considered experimental (my words, not theirs, but the gist I took away from reading the forums).  I converted the 29 drives back from ZFS single-drive zvols and SnapRAID dual parity on Ubuntu 14.04, so the loss of checksums and snapshots was too big a deal for me to use XFS.  The fact that the BTRFS on-disk format is considered stable, combined with the fact that Unraid doesn't use any really quirky BTRFS features on data drives and ships with a very recent 4.4.6 kernel on the 6.2 betas, led me to take a chance on BTRFS.

 

At the end of the day nothing on the Unraid data drives is truly critical for me.  Anything critical is on my 9TB ZFS array (BTW, I LOVE ZFS!) which I have hosted in an Ubuntu 16.04 server VM with the motherboard's SATA controller passed through (same Unraid 6.2 box).  The VM has a Python script I wrote that manages snapshots (72 hourly, 365 daily, 156 weekly, 600 monthly) and also backs up via CrashPlan to a secondary 6.1.9 XFS Unraid box onsite, and offsite to the CrashPlan cloud.  So anything valuable is taken care of.
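For the curious, that kind of snapshot rotation needs surprisingly little code. My script is in Python, but here's a minimal shell sketch of the hourly tier only, with the dataset name and retention count purely illustrative:

    #!/bin/bash
    # hypothetical dataset name; substitute your own pool/dataset
    DATASET="tank/data"

    # take a timestamped hourly snapshot
    zfs snapshot "${DATASET}@hourly-$(date +%Y%m%d-%H%M)"

    # prune: list hourly snapshots oldest-first, destroy all but the newest 72
    zfs list -H -t snapshot -o name -s creation -r "$DATASET" \
      | grep "@hourly-" \
      | head -n -72 \
      | xargs -r -n1 zfs destroy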

 

So far I have to say that BTRFS on Unraid has been rock solid, but time will tell.  I have cranked this 6.2 box enough times this week that I thought for sure I'd have troubles.  So far all my scrubs have come back clean, so my faith in BTRFS is slowly growing.  Maybe one day it'll earn back some of the respect it lost during the early releases, but right now it's still not something I'd recommend to anyone as a primary solution.  I lost data to it years ago and swore I'd never look at it again, but I have to admit it is gradually winning me back...

 

In my blue-sky world, BTRFS remains stable and its scope within Unraid gets expanded.  Subvolume support, so we could use snapshots, would be a welcome addition to Unraid's toolset, at least in my eyes.  Using snapshots combined with Samba shadow-copy VFS support (Windows Previous Versions) converted me to ZFS years ago, and it would be nice if we could get there with BTRFS built into the kernel (no ZFS modules!) while still maintaining the "appliance"-like nature of Unraid.
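The Samba side of that is the shadow_copy2 VFS module. A hedged sketch of the share settings, assuming ZFS-style snapshots exposed under .zfs/snapshot and a snapshot naming scheme to match (the share name, path and format here are examples only):

    # append an example share definition to smb.conf
    cat >> /etc/samba/smb.conf <<'EOF'
    [documents]
        path = /tank/data
        vfs objects = shadow_copy2
        shadow:snapdir = .zfs/snapshot
        shadow:sort = desc
        shadow:format = hourly-%Y%m%d-%H%M
    EOF

With something like that in place, Windows clients see the snapshots under the Previous Versions tab of the share.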

 

All this is my humble opinion of course and like politics and religion, file systems can be divisive topics.  :)

 

 

Link to comment

How do you downgrade this to 6.1.9?

 

I tried installing from the plugins menu, but it posts "plugin: not installing older version"

 

Will there be a 6.2.0-beta22 soon? A fresh install of a Windows 10 VM being broken and locking up the system is really bumming me out.

 

You need to manually revert.

 

Yes, the next version is always soon™.

Link to comment

Download the 6.1.9 release and extract the bzroot and bzimage files.  Copy these over to your flash drive and delete the bzroot-gui file that is present on your flash drive.

 

That will downgrade to 6.1.9

 

There will still be a menu item for Unraid GUI but it won't do any harm.
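As a hedged sketch of those steps from a running server's console, assuming the flash drive is mounted at /boot and the 6.1.9 zip was extracted to /tmp/unraid-6.1.9 (both paths illustrative):

    # copy the 6.1.9 kernel and root image over the 6.2 versions
    cp /tmp/unraid-6.1.9/bzimage /tmp/unraid-6.1.9/bzroot /boot/

    # remove the 6.2-only GUI image; 6.1.9 doesn't use it
    rm /boot/bzroot-gui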

 

Sent from my LG-H815 using Tapatalk

 

 

Link to comment

After upgrading from 6.1.9 it will not boot properly; it keeps saying that it's looking for /dev/disk/by-label/UNRAID. I double-checked and the label is fine, so I assume the new version does not like my USB key, which is: Bus 002 Device 002: ID 0781:5567 SanDisk Corp. Cruzer Blade

Installed 6.1.9 back, restored the config, and it booted immediately.
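For reference, a quick way to double-check what the kernel actually sees, from the unRAID console or any Linux box (nothing here is unRAID-specific):

    # the boot scripts look this symlink up; it should point at the flash partition
    ls -l /dev/disk/by-label/

    # show the label the filesystem itself carries
    blkid | grep -i unraid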

Link to comment

I saw a post yesterday where someone had similar problems, and it was because, although the key was called UNRAID, the label had been entered using an OS that did not use English as its language.

 

Is there a chance that this also applies to you?
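If a non-ASCII or mis-entered label does turn out to be the cause, it can be rewritten from any Linux console. A hedged sketch, assuming the flash partition appears as /dev/sda1 (the device name is illustrative) and is FAT32 as usual for unRAID keys:

    # rewrite the FAT volume label as plain-ASCII UNRAID
    fatlabel /dev/sda1 UNRAID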

 

Sent from my LG-H815 using Tapatalk

 

 

Link to comment

I just went to shut down my server, as I was going to upgrade a 2TB drive to a 3TB drive, and I had issues shutting down. The server was up in GUI mode, but I did the shutdown through the web UI on another machine.

 

First I manually stopped the dockers one at a time, then unmounted the array, then, using the Dynamix buttons, selected shut down. I got some errors on the screen during shutdown, which I took pictures of, and I am also attaching a diagnostics file for review.

 

 

**EDIT** A second attempt was made; this time I did the following:

- Clicked the button at the bottom to stop the array, letting it shut down the dockers on its own.

- Clicked the shutdown button at the bottom, letting it shut the system down, which it did without any errors I could see on the screen connected to the server.

tower-diagnostics-20160414-1107.zip

IMG_1103.jpg.808d0d8dc532dc6e87ed818bac36e049.jpg

Link to comment

After upgrading from 6.1.9 it will not boot properly; it keeps saying that it's looking for /dev/disk/by-label/UNRAID. I double-checked and the label is fine, so I assume the new version does not like my USB key, which is: Bus 002 Device 002: ID 0781:5567 SanDisk Corp. Cruzer Blade

Installed 6.1.9 back, restored the config, and it booted immediately.

I saw a post yesterday where someone had similar problems, and it was because, although the key was called UNRAID, the label had been entered using an OS that did not use English as its language.

 

Is there a chance that this also applies to you?

 

Sent from my LG-H815 using Tapatalk

 

No, I'm using English. I also tried beta21 with a clean install (no previous config or any modification) and the same thing happened.

Link to comment

I'd try another USB key with a fresh install and just see if it boots.  It makes me wonder if it's a problem with your USB key...

 

Sent from my LG-H815 using Tapatalk

 

That would be the way, but I'm not going to buy a new USB key just for testing :D since 6.1.9 stable works fine.

Still, I've included the USB identifier; maybe some dev will add an exception for it ^^

Bus 002 Device 002: ID 0781:5567 SanDisk Corp. Cruzer Blade

Link to comment

Is this safe to use on a main server?

My experience is that this is stable enough to use on MY main server.  However, I realise it is a beta, and so I have complete backups of the server's contents in case anything goes wrong.  I have been running it for over a week so far with no issues.

If you are making extensive use of dockers and/or VMs then it is likely to be less stable for those purposes.  However, there are also new features in these areas, so it is a risk trade-off that you need to make for yourself.

Link to comment

Same here -- I've found it reliable enough for my main server.  Caution is good in these sorts of things of course, but it is beta21 after all.  :)

Link to comment

It is beta21 after all.  :)

True, but we did not see the first 17 beta releases, so it is only the fourth public beta release.
Link to comment

Archived

This topic is now archived and is closed to further replies.

