unRAID Server Release 6.2 Stable Release Available


limetech


Write Speed Test

  • Write speed to the cached share, with all other hardware and the test file the same (4 GB ISO file)
  • v5: near 100 MB/s
  • v6.1.9: max of 40 MB/s
  • v6.2: 57 MB/s
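For anyone wanting to reproduce this kind of test from the unRAID console, a rough sketch (the /mnt/cache path and 4 GB size mirror the test above; the file name is just a placeholder, and oflag=direct support depends on your dd build):

```shell
# Write a 4 GB test file straight to the cache disk; oflag=direct skips
# the page cache so the reported rate reflects the disk, where supported.
dd if=/dev/zero of=/mnt/cache/speedtest.bin bs=1M count=4096 oflag=direct

# dd prints elapsed time and throughput on completion; clean up afterwards:
rm /mnt/cache/speedtest.bin
```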

 

v6.2 write speeds are still not as fast as v5, but they are roughly 40% quicker than v6.1.9!  The low cache write speeds have been my biggest complaint with v6, but the benefits far outweigh this drawback.

 

I strongly recommend looking into adjusting your disk tunables; they can make a dramatic difference.  It's possible you may be able to get close to v5 speeds again.
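For reference, these tunables live under Settings → Disk Settings in the webGui; they can also be poked from the console with mdcmd. A sketch only; the numbers below are placeholders, not recommendations (the tunables script discussed in this thread finds good values for your hardware):

```shell
# Illustrative only - these are the same values Settings -> Disk Settings
# edits; values here are placeholders, not tuned recommendations.
/root/mdcmd set md_num_stripes 4096
/root/mdcmd set md_sync_window 2048
```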

 

Oh my goodness... Parity check after running Squid's modified tunables script, set to unlimited/unteathered...  2 threads around 80%, 2 up to 40%.  And I can now watch a DVD without fail, or a 1080p with only an occasional stutter, during the parity check.  On track to shave a goodly chunk of time off of the check too!  Awesome recommendation, RobJ

 

Which script did you use this time? Where did you see unlimited/unteathered?

Squid's Modified Tunables Script.  2 posts down from http://lime-technology.com/forum/index.php?topic=29009.msg424206#msg424206

 

Wrong word:  Unthrottled or Best Bang are your two choices at the end of the test.  I chose Unthrottled, as I have never seen this server go above 25% memory usage.

 

Same script as I used. Also chose Unthrottled. My main server has plenty of memory and horsepower. I sometimes see 3 threads at 40-60%; memory never goes above 9%.


Squid's Modified Tunables Script shaved 2 hours off of the 15.5 hour parity check.

No, give credit where credit is due.  This is Pauven's script; I just made a small mod so it would work under 6.2.  And his next release should be even better. 

 

Sent from my LG-D852 using Tapatalk

 

 


Sorry Pauven!  AWESOME script you have here.  Looking forward to your next release :)

 

A few more goodies

Write speed performance Boost - D525

Cache Drive:  Samsung 850 Pro, 256 GB

 

Max copy speed (pre-tweak) was 57 MB/s when idle and 42 MB/s during a parity check, AFTER running Pauven's Disk Tunables script

 

The following was all done on my Windows 10 machine

  • Regedit: under HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters,
    create the DWORD DisableBandwidthThrottling and set it to 1
  • gpedit.msc / Computer Configuration/Administrative Templates/Network/QoS Packet Scheduler/Limit reservable bandwidth.  Edit, Enable, change 80 to 0
  • Settings/Updates&Security/Advanced Options/Choose how updates are delivered/Turn OFF deliveries of updates to other computers
  • CMD as administrator (default is 'normal') / netsh interface tcp set global autotuninglevel=disabled
  • CMD as administrator (default is 'enabled') / netsh interface tcp set global rss=disabled
  • Programs&Features/ uncheck Remote Differential Compression
  • Ethernet Adapter / uncheck IPV6
  • Ethernet Adapter Properties / Set Speed & Duplex from Autonegotiate to 1.0 Gbps Full Duplex    (note that I have a managed switch)
  • Disable Unused Network Adapters
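The registry and netsh steps above can be scripted from an elevated command prompt on the Windows 10 machine; a sketch (apply at your own risk, and note the group-policy, Windows Update, and adapter steps still have to be done in the GUI):

```shell
:: Run from an elevated cmd.exe on Windows 10. Reboot afterwards.

:: SMB client bandwidth throttling off
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v DisableBandwidthThrottling /t REG_DWORD /d 1 /f

:: TCP receive-window auto-tuning off (default is 'normal')
netsh interface tcp set global autotuninglevel=disabled

:: Receive-side scaling off (default is 'enabled')
netsh interface tcp set global rss=disabled
```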

And the results after rebooting the Windows 10 machine?

  • File copies DURING a dual parity check using TeraCopy are now at 45 MB/s, and 64 MB/s otherwise. 
  • File copies DURING a dual parity check using Windows Explorer are now at 78 MB/s, and 98 MB/s otherwise.
  • Speakeasy download speed jumped from 22 to 37.

 

Using Explorer I am now up to full copy speed...

Using TeraCopy's built-in file integrity check I lose a little max speed, but this is still double my best write-speed scenario under v6.1.9 (and untweaked Windows)



 

This is unreal!

 

I've been chasing performance issues on my D525 for years now (I guess since v6 came out?). I have been ready for weeks to pull the $400 trigger on a new Pentium D1508 board and DDR4 RAM, but if this works for me it will save so much cash. I already spent about $100 on a new cache drive a month or two ago with no success (but I have since repurposed the old cache so that's OK)

 

I can tell you what I'll be doing when I get home from work...


Dead on the same with me!  v5 had close to full write speed to the cache on my Supermicro D525... v6.1.9 dropped down to ~40 but was worth the tradeoff for the added functionality.  v6.2 upped the performance to about 57 when idle... and then I noticed that my Linux machine was copying at around 78.  Started googling and applied each of the mentioned tweaks one at a time, and each one brought another jump in copy performance using Explorer copy on my Windows 10 machine.   


1. I threw in a cache drive to store my appdata and system folders. I made sure they both are set up CACHE DRIVE-ONLY. But I still see these two folders in the user share as well. I checked each drive independently, and the appdata and system folders are only on the cache drive. Just making sure I'm supposed to see those folders when I view my user share(s).

 

2. After the upgrade unraid made a folder called user0. Looks almost like a mirror of my system. Is that normal behavior in this version? I did read some really old posts that indicated user0 as well, so maybe it's a little bug that's traveled with us for a while? Is it safe to remove it?

 

I have added a couple of dockers, and all "seems" to be working well. I wish we had the option to turn off this automatic creation of folders. Folders like appdata and system should be on an isolated drive, right?

 

 

Thanks,

Dizzy Mike

 

Spindown of SAS and other pure-SCSI devices assigned to the parity-protected array does not work (but SATA attached to SAS controller does work)

 

I have the Asrock E3C224D4I-14S server motherboard with the following:

 

4 x SATA3 by Intel C224 support RAID 0,1,5,10

8 x SAS2 from 2 x mini SAS 8087 connector by LSI 2308

 

Does that mean Unraid 6.2 will not spin down my drives?



SATA drives attached to your SAS ports should work ok.  SAS drives attached to your SAS ports will be problematic.

 

Note: SAS controllers typically support both SAS drives and SATA drives.
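The difference is in the command set: unRAID's spindown uses ATA standby commands, which SATA drives honor even behind a SAS HBA, while SAS drives expect SCSI start/stop commands instead. Roughly (device names here are hypothetical, and this is a manual illustration, not what unRAID itself runs):

```shell
# SATA drive behind the LSI 2308 - an ATA "standby now" works:
hdparm -y /dev/sdb

# SAS drive - ATA commands don't apply; it would need a SCSI STOP,
# e.g. via sdparm (from the sdparm package):
sdparm --command=stop /dev/sdc
```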


FYI - Just  updated from 6.1.9

I like what I see so far!

 

File system update was smooth; I was, however, bitten by the Docker bug. My image file was on my main cache SSD, but all my config files were on an external USB SSD mounted via Unassigned Devices. I ended up deleting the docker.img, copying all my config folders into the new appdata path, and recreating my 10 different dockers in a new image file with updated appdata paths. All my docker configs and associated data were saved and restored except for Plex - I needed to rebuild my Plex library from scratch....

 

It looks like my custom (non-template) OpenELEC VM might need to be rebuilt as well; I get the following error popup when I try to start it. Not a big deal, I rebuild it for fun every once in a while... was thinking about trying LibreELEC...

 

<< Error Starting OpenELEC VM>>

 

Execution error

 

internal error: process exited while connecting to monitor: 2016-09-21T04:31:19.353644Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=pcie.0,multifunction=on,x-vga=on: vfio: error opening /dev/vfio/1: Operation not permitted

2016-09-21T04:31:19.353662Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=pcie.0,multifunction=on,x-vga=on: vfio: failed to get group 1

2016-09-21T04:31:19.353668Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=pcie.0,multifunction=on,x-vga=on: Device initialization failed
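When vfio reports "Operation not permitted" opening /dev/vfio/N, it's worth checking that the IOMMU group node exists and what else shares the group with the passed-through device; something along these lines (the PCI address and group number are taken from the error above):

```shell
# Does the IOMMU group node from the error exist, and with what permissions?
ls -l /dev/vfio/

# Which IOMMU group is the device at 01:00.0 in, and what else is in it?
readlink /sys/bus/pci/devices/0000:01:00.0/iommu_group
ls /sys/kernel/iommu_groups/1/devices/

# Which driver is currently bound to the device?
lspci -nnk -s 01:00.0
```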

 

Cheers,

 

BR


1. I threw in a cache drive to store my appdata and system folders. I made sure they both are set up CACHE DRIVE-ONLY. But I still see these two folders in the user share as well. I checked each drive independently, and the appdata and system folders are only on the cache drive. Just making sure I'm supposed to see those folders when I view my user share(s).

All top level folders on data drives and on the Cache drive are automatically User Shares, with default share settings.  You'll want to configure their visibility and access.

 

2. After the upgrade unraid made a folder called user0. Looks almost like a mirror of my system. Is that normal behavior in this version? I did read some really old posts that indicated user0 as well, so maybe it's a little bug that's traveled with us for a while? Is it safe to remove it?

If User Shares are enabled, you'll have /mnt/user.  If you also have a Cache drive, then /mnt/user0 will also appear.  Ignore it, it's used by the Mover to move files and folders from the Cache drive to the data drives, if any shares are configured for that.  Don't remove either of the user folders.


Smooth upgrade from 6.1.9 to 6.2 Stable.  Quick spot checking shows Plex is working, and my LAMP server in a VM is fine.  Will check the other VMs later (some Linux, some Windows).

 

Started a parity check overnight (I juggled some disks) and that completed in 32953sec ... slightly better than average.  Looking good...

 

Have now added the 2nd parity disk and that build is running.  Watching the Dashboard, the build seems to be using 3 out of 6 cores, easily averaging below 30%, and a max usage below 50%.  Not bad for this (Cortex) somewhat older AMD system.

 

Thanks LimeTech, will report back later, but am not presently expecting any issues.

 

Quick update, Parity2 sync has now completed, 35531sec for 4TB ... again, not bad.
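As a sanity check on those numbers, 4 TB in 35531 sec works out to roughly 112 MB/s average; a quick shell-arithmetic helper (decimal terabytes assumed):

```shell
# Average sync speed in MB/s from array size (bytes) and elapsed seconds,
# using integer arithmetic (good enough for a ballpark figure).
sync_speed_mb_s() {
  echo $(( $1 / $2 / 1000000 ))
}

sync_speed_mb_s 4000000000000 35531   # parity2 build: prints 112
sync_speed_mb_s 4000000000000 32953   # earlier parity check: prints 121
```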

 

Turns out Plex is a little hit-and-miss, so will need to investigate that.  Some stuff works, some stuff is ... behaving strangely.  At a minimum, I know Plex said there was an update pending.  Will work on that as I have time.  Will also check in on the rest of the VMs.  Hopefully tomorrow...

 

In case anyone else has run into something similar, the problem was not an issue with unRaid at all ... just something weird about the PlexTV app on Apple TV that was released on Sept 15th, and fixed on the 20th.  Bad luck having a Plex bug introduced at the same time I was doing my upgrade to unRaid 6.2.  Wasted a lot of my time investigating, but glad it's finally resolved so my kid can go back to watching his favorite programs.



SATA drives attached to your SAS ports should work ok.  SAS drives attached to your SAS ports will be problematic.

 

Note: SAS controllers typically support both SAS drives and SATA drives.

 

All my drives are SATA. I use a Mini-SAS to SATA cable to connect the first 8 drives.

 

Ok I will upgrade to 6.2 tonight. Hopefully everything works fine! Thanks

 

 


- Unlike 6.1.9, the Docker system in 6.2 no longer supports the docker.img file to be located on a disk mounted with the Unassigned Devices plugin.  You must locate it either on the Cache drive or on the array. 

 

Even if this was never officially supported, this regression is REALLY sad.

 

No that restriction is not true.  We'll update the OP.

Many, many people have trouble with this. Some do not.  Probably a race condition, but I still felt my contribution was justified.

 

Sent from my LG-D852 using Tapatalk

 

I haven't looked at the UA plugin for a while; as long as it mounts devices in response to the 'disks_mounted' event, which takes place before services are restarted, it should work ok.

 

UD mounts its devices on the 'disks_mounted' event.

 

So what is the conclusion here Tom?

Is this now something with UD or some defect relative to 'disks_mounted' event?



I don't think the issue is with the UD mounting, but maybe with the mapping of the Docker location.  There were some changes in 6.2 related to the Docker mapping.  Nothing in UD has changed in a long time.

 

Can someone provide a UD log and their system diagnostics from when Docker doesn't start on a UD-mounted device?


Thanks everyone for all the hard work on this. I'm stuck with a problem relating to VMs. When I go to the VM tab I see my 3 previously created VMs, all off, as I RTFM'd and set them to not auto-start. I wanted to check things out before starting them, so I went to create a new VM by clicking on Templates and then selecting the OpenELEC prepackaged VM. It downloads the ISO just fine, and then when I go to create the VM (leaving everything at default) it just sits on "creating  . . ." The folders are made in my appdata folder and the VM can run, but it never completes the creating step.

 

unraid-vm-error1.png

 

The next issue is that if I leave the page while it sits on creating and go back, I get an error message

Warning: libvirt_domain_xml_xpath(): namespace warning : xmlns: URI unraid is not absolute in /usr/local/emhttp/plugins/dynamix.vm.manager/classes/libvirt.php on line 936 Warning: libvirt_domain_xml_xpath():

and the screen is messed up

unraid-vm-error2.png

It also affects the Dashboard

unraid-vm-error3.png

 

Any ideas?


Just need to know where I can read up on how to point dockers to a different network interface, to use that as the br0 or whatever, so they use that subnet.

 

 

Thank you.

That is usually done by setting the container to use the host network.

There are also other ways to do it, but those involve command-line skills. Not sure if the pipework container can be used for this.

Do a search on the forum or Google and you should find some info on the subject.
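For the common case, that just means changing the container's network type; from the command line the equivalent would be something like this (the container/image names, interface, and subnet are placeholders):

```shell
# Host networking: the container shares the unRAID host's interfaces/subnet
docker run -d --name myapp --net=host myimage

# Alternatively, a macvlan network can put containers on a specific NIC
# and subnet (interface name and addresses here are placeholders):
docker network create -d macvlan \
  --subnet=192.168.2.0/24 --gateway=192.168.2.1 \
  -o parent=eth1 br1
docker run -d --name myapp --net=br1 myimage
```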



 

Thank you. I'll leave everything where it is now, since it's all working properly. Long-time unRAID user, but first-time cache drive user.


If User Shares are enabled, you'll have /mnt/user.  If you also have a Cache drive, then /mnt/user0 will also appear.  Ignore it, it's used by the Mover to move files and folders from the Cache drive to the data drives, if any shares are configured for that.  Don't remove either of the user folders.

Just thought I would elaborate on this a bit, since I think it is good to know a little about how things work under the hood. /mnt/user0 is the user shares excluding any files still on the cache; the mover script moves files/folders from /mnt/cache to /mnt/user0.
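Conceptually, the mover's job can be sketched like this (a simplified illustration only, NOT the actual unRAID mover script, which also checks share cache settings and skips open files):

```shell
# Simplified sketch of mover behavior: for each share directory on the
# cache disk, move its contents to the same-named share on /mnt/user0,
# i.e. onto the array disks.
for share in /mnt/cache/*/; do
  name=$(basename "$share")
  rsync -a --remove-source-files "$share" "/mnt/user0/$name/"
done
```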

I've encountered a strange issue. I upgraded from 6.1.9 a couple of days ago. Rebooted, no issues. I'm running Unraid as a VM in ESXI 5.1 with a BR10i SAS card in pass through. All (sata) drives in the array are connected to the BR10i.

 

I had to power down the server today, and when I powered it up, Unraid didn't start the array. None of the drives were anywhere to be seen. So I rebooted Unraid again, and there's a message during boot saying that the SAS driver can't be loaded. I can't remember exactly what it said, but obviously the SAS card isn't working.

 

Pretty strange, since it had been working just fine for a few days. I downgraded to 6.1.9 again and the SAS driver loads as it should.

 

What can cause this?



An update: I tried it again, and the SAS driver only fails to load with 6.2 after a power-down of the VM. Just rebooting (without a power-down) is fine.

 

This is what it says:

mpt2sas_cm0: failure at drivers/scsi/mpt3sas/mpt3sas_scsih.c:8592/_scsih_probe()

 

According to http://www.spinics.net/lists/linux-scsi/msg95377.html it's a bug in Linux kernel 4.4.19.

 

EDIT: Found a fix here https://lime-technology.com/forum/index.php?topic=49481.0

