Feedback Please - Unraid 6.7.2 vs. 6.8 vs. QNAP - Disk Speed Tests


wbhst83


I could use some feedback or even suggestions on how to improve things. I set out today to do some disk speed tests to show how much of an improvement my DIY Ryzen "bad-ass" server (at least in my mind) is over my old QNAP. Well, I was dealt a blow in this challenge and found that the factory QNAP seems to outperform the Unraid server.

 

So let me explain my setup and testing. I have three systems: a Ryzen Unraid server (primary server), a QNAP Unraid server (backup server), and a QNAP test box that is factory standard. In my tests, I used my Mac to connect over SMB to a share on each box that uses the cache drives. The Docker/VM services were disabled, and I was hard-wired to my 1Gb network stack. During testing, I downgraded my two Unraid boxes to 6.7.2, ran the tests, and then brought both back to 6.8. I used two applications that I found, and they seem to track NAS performance decently: HELIOS LanTest and Blackmagic Disk Speed Test. I broke down the details of the servers below and included a screenshot of the combined test results.
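For anyone who wants to sanity-check the GUI tools, here is a rough command-line sketch of the same kind of sequential test, run from the Mac against one of the SMB mounts (the mount point and file name are placeholders, and raw dd numbers are only a ballpark):

# Sequential write: ~1 GiB of zeros to the mounted cache share (placeholder path)
dd if=/dev/zero of=/Volumes/CacheShare/speedtest.bin bs=1m count=1024

# Sequential read of the same file back
dd if=/Volumes/CacheShare/speedtest.bin of=/dev/null bs=1m

# Remove the test file when done
rm /Volumes/CacheShare/speedtest.bin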

 

What I see from the results is, regrettably, that the factory QNAP beats all the other setups in most cases. Secondly, I also see that 6.7.2 comes in a strong second. Maybe my testing or my setup is flawed. I also realize the QNAPs have different drives than the Ryzen server, but arguably the backup server and the QNAP test box should perform close to each other, right? I will say I'm still happy with Unraid and its flexibility, and it works for my needs (except the recent slowness in Plex after the 6.8 upgrade). So my question is this: are others seeing similar results?

 

Table of test results: https://take.ms/8EC3R

 

Server Specs:

Primary Unraid Server

  • ASRock X570M Pro4, Bios 2.30
  • AMD Ryzen 5 3600 6-Core 3.6GHz
  • 32GB DDR4 3200
  • 2X 660p NVMe PCIe M.2 1TB - Cache Drives
  • 2X WDC_WD80EMAZ - Parity Drives (WD 8TB white-label Reds)
  • 3X WDC_WD80EMAZ - Array (WD 8TB white-label Reds)

 

Backup Unraid Server

  • QNAP TS-451+
  • Intel Celeron CPU J1900 @ 1.99GHz
  • 16 GB 1600 Memory
  • 1x Toshiba X300 4TB 7200 RPM 128MB Cache - Parity Drive
  • 1x Crucial MX300 1TB 3D NAND - Cache Drive
  • 2x Toshiba X300 4TB 7200 RPM 128MB Cache - Array

 

Test QNAP Box

  • QNAP TS-453A - QTS 4.4
  • Intel Celeron CPU N3160 @ 1.60GHz
  • 4GB Memory
  • 1x Crucial MX300 1TB 3D NAND - Cache Drive
  • 3x Toshiba 4TB 7200 RPM 128MB Cache - Array
Link to comment

I must admit I don't know much about QNAP, and you haven't told me enough about your testing to make much sense of it. How is cache even involved? If this is strictly a transfer test involving only cache, why didn't you make that clear? If it isn't only about cache, then what does cache actually have to do with the testing?

 

Is the QNAP using RAID of some kind?

 

Do you know that Unraid is NOT RAID?

 

Do you know how caching in Unraid works?

 

Speed isn't the primary design criterion with Unraid.

 

Link to comment
9 hours ago, trurl said:

I must admit I don't know much about QNAP, and you haven't told me enough about your testing to make much sense of it. How is cache even involved? If this is strictly a transfer test involving only cache, why didn't you make that clear? If it isn't only about cache, then what does cache actually have to do with the testing?

The two Unraid setups are both identical; I have the share's Use cache setting set to Yes. The factory QNAP box is set up in RAID 5 with a feature they call cache acceleration. Sorry I left that out of the post. "Long time listener... first time caller."

 

Quote

 

Is the QNAP using RAID of some kind?

RAID 5 with cache acceleration.

 

Quote

 

Do you know that Unraid is NOT RAID?

Yes, and I realize Unraid's primary design criterion isn't speed. But I had thought (maybe incorrectly, without facts) that it would still kick a Celeron QNAP's butt. I also don't expect the performance I get from the SAN that runs my employer's data center.

 

Quote

 

Do you know how caching in Unraid works?

Maybe I don't; I'm fairly new to the platform. I had been using QNAP for 6+ years and got tired of the performance issues and the constant hacking they could not stay ahead of. I built this system in September and added the backup Unraid server the other week, so I could begin backing up my movie collection from the primary.

 

Edited by wbhst83
Link to comment

Without tuning the disk parameter settings, Unraid can perform quite slowly, but you also need to understand that the Unraid array pool doesn't use RAID, so array pool performance often loses out compared to other solutions.


You need to do more research on how to get the magic out of Unraid (and also understand its limitations). I was a QNAP fan before too, but haven't been for a long time.

 

 

 

Link to comment
2 hours ago, wbhst83 said:

But I had thought (maybe incorrectly, without facts) that it would still kick a Celeron QNAP's butt.

It has nothing at all to do with the CPU. RAID5 stripes data across disks, so it reads/writes multiple disks in parallel to access the data. Unraid reads/writes a single disk when accessing the array, and possibly uses multiple disks in the cache pool depending on how that is configured (which you don't specifically mention).

2 hours ago, wbhst83 said:

The two Unraid setups are both identical

According to the specs you listed, one of them has

13 hours ago, wbhst83 said:

2X 660p NVMe PCIe M.2 1TB - Cache Drives

and the other has

13 hours ago, wbhst83 said:

1x Crucial MX300 1TB 3D NAND - Cache Drive

so not at all the same with regard to the very thing you say you were testing.

 

11 hours ago, trurl said:

Do you know how caching in Unraid works?

2 hours ago, wbhst83 said:

Maybe I don't

A cache-yes share writes all data to cache if the cache has sufficient capacity. Then it just stays there until the next time Mover runs; the default Mover schedule is once per day in the middle of the night. I suspect this caching model is very different from QNAP's.
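A quick way to see this in practice (just a sketch; "MyShare" is a placeholder share name) is to look at where the files physically sit, and to kick off Mover by hand instead of waiting for the schedule:

# Files still sitting on the cache pool
ls -lh /mnt/cache/MyShare

# Files that have already been moved to array disks
ls -lh /mnt/disk*/MyShare 2>/dev/null

# Run Mover manually instead of waiting for the nightly schedule
/usr/local/sbin/mover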

 

Link to comment
1 minute ago, trurl said:

RAID5 stripes data across disks, so it reads/writes multiple disks in parallel to access the data. Unraid reads/writes a single disk when accessing the array

And this basic difference is the tradeoff between the two systems. RAID5 is always going to be faster. It is all about the disk I/O, not about the processor. But RAID5 disks cannot be used by themselves: if you lose more than parity can recover from, you lose it all. An individual RAID5 disk has no usable data.

 

Each disk in Unraid can be used by itself. If you lose more than parity can recover from, you haven't lost it all. Each data disk in the Unraid parity array has complete files that can be easily accessed on any Linux system.

 

Also unlike RAID5, Unraid allows you to mix disks of different sizes in the array, and allows you to easily replace or add a disk without rebuilding the whole array.

 

Link to comment

On the primary Unraid server, the cache disks are set up in RAID 1.

 

Apologies, when I said "The two Unraid setups are both identical" I meant to say the two QNAP devices are nearly identical, with the major difference being the OS. From what I see in the QNAP console, it basically tries to keep data that is accessed often on the cache and then moves off old data on a daily schedule.

 

Tweaking settings was mentioned; any suggestions on tweaks I can make to help things? Again, I'm new to the OS and really happy with it. I just want to be able to pull all the performance I can out of my configuration, so I'm trying to learn what to do better. I looked at https://wiki.unraid.net/Improving_unRAID_Performance, but it appears to be dated, and the tweaks mentioned either did not apply or did not make much of a difference.

Link to comment

The biggest boost in writing to the array is Turbo (reconstruct) write. See this thread for an explanation of the 2 different methods for updating parity and their tradeoffs:

 

https://forums.unraid.net/topic/50397-turbo-write/

 

There is a plugin that enhances this somewhat by automating that setting. Search the Apps page for turbo.
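If you just want to try it without the plugin, turbo write can also be flipped in Settings -> Disk Settings ("Tunable (md_write_method)"), or from the console for a quick before/after comparison (a sketch only; paths as on a stock install):

# 1 = reconstruct ("turbo") write, 0 = normal read/modify/write
/usr/local/sbin/mdcmd set md_write_method 1
/usr/local/sbin/mdcmd set md_write_method 0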

 

And, you should make sure your dockers and VMs are running on SSD cache instead of the array. You do this by making sure that appdata, domains, and system shares are all on cache with nothing on the array, and configured to stay on cache (cache-prefer or only). You can see which disks each user share is using by going to User Shares and clicking Compute... for the share. What commonly happens is people will enable dockers and VMs before actually installing cache, so they get created on the array and get stuck there because open files can't be moved.
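As a rough console equivalent of clicking Compute... (a sketch; adjust the share names if yours differ), you can check how much of each of those shares sits on cache versus on the array:

# Anything reported under /mnt/disk* is sitting on the array
du -sh /mnt/cache/{appdata,domains,system} /mnt/disk*/{appdata,domains,system} 2>/dev/null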

Link to comment

I hadn't noticed (I didn't browse the URL) that you are talking about random access performance; in fact, I seldom focus on this area.

HELIOS seems like a good tool, so maybe it's time to run some random access tests. Below are results FYR (none of the tests use SSDs).

 

You may also refer to the tweak below:

[Screenshot: tweak.PNG]

 

 

TS-851, Unraid with two 3TB disks mounted by UD in RAID 0

[Screenshot: r0.PNG]

 

[Screenshot: Q851_HELIOS.png]

 

TS-251+, QTS 4.4 with two 6TB disks in RAID 1

[Screenshot: Q251_HELIOS.png]

 

DIY Unraid build, testing an 8TB disk in the array pool.

[Screenshot: Z390.png]

Edited by Benson
Link to comment
22 hours ago, trurl said:

The biggest boost in writing to the array is Turbo (reconstruct) write. See this thread for an explanation of the 2 different methods for updating parity and their tradeoffs:

 

https://forums.unraid.net/topic/50397-turbo-write/

 

There is a plugin that enhances this somewhat by automating that setting. Search the Apps page for turbo.

 

And, you should make sure your dockers and VMs are running on SSD cache instead of the array. You do this by making sure that appdata, domains, and system shares are all on cache with nothing on the array, and configured to stay on cache (cache-prefer or only). You can see which disks each user share is using by going to User Shares and clicking Compute... for the share. What commonly happens is people will enable dockers and VMs before actually installing cache, so they get created on the array and get stuck there because open files can't be moved.

 

I have appdata and the Plex metadata set to Yes for those shares. I do have the Turbo Write plugin installed, but I will adjust it to reconstruct write. I also ran into this script that I'm going to run to see if the settings will perform better.

 

Link to comment
49 minutes ago, wbhst83 said:

I have appdata and the Plex metadata set to Yes for those shares.

That is exactly the WRONG setting and could cause exactly the problem of getting your stuff on the array, where it will perform worse, will keep disks spinning, and is difficult to get back onto cache where it belongs. What I said was

23 hours ago, trurl said:

configured to stay on cache (cache-prefer or only)

Yes means write to cache then move to array.

 

Better post your diagnostics so I can take a look. It can take several steps to get this right if you have already made it wrong.

Link to comment

OK, that looks good, appdata, domains, and system all on cache and set to cache-prefer.

 

But your docker image is larger than I usually recommend. I am concerned that it is already using 20G. Are you sure you don't have something writing into the docker image? My usual recommendation is 20G for the total docker image allocation, and it shouldn't be growing.

 

Go to Docker, click on Container Size at the bottom, and post a screenshot.
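From the console you can get roughly the same numbers (a sketch; the docker.img path below is the usual default and is an assumption about your setup):

# Per-container and per-volume disk usage, similar to the Container Size button
docker system df -v

# Size of the docker image file itself (default location assumed)
du -h /mnt/user/system/docker/docker.img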

Link to comment

Please attach directly to your post in future instead of linking to unknown external sites.

 

You have more dockers than I usually run, and I know Krusader is large (I just use the built-in Midnight Commander instead). Each of your Plex containers is almost 3 times as large as mine. I use the linuxserver.io Plex instead of binhex, but I doubt there is that much difference. Maybe you are transcoding into the docker image? Are you sure your transcode path is mapped?
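One quick way to check the mapping (just a sketch; "binhex-plex" stands in for whatever your container is actually called) is to list the host paths bound into the container:

# Shows the host:container path mappings; look for a transcode directory
docker inspect -f '{{ json .HostConfig.Binds }}' binhex-plex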

 

The main thing though is your docker image shouldn't grow. Has it been growing?

Link to comment

I ran the tunables test and this is what came back for the Backup Unraid Server:

*******************************************************************************
                   Unraid 6.x Tunables Tester v4.1 by Pauven

             Tunables Report produced Sun Dec 22 16:47:33 EST 2019

                              Run on server: Backup

                             Short Parity Sync Test


Current Values:  md_num_stripes=2048, md_sync_window=, md_sync_thresh=
                 Global nr_requests=Auto
                 Disk Specific nr_requests Values:
                    sde=64, sdb=64, sdd=64,


--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 10sec Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s
----------------------------------------------
  1 |  37 | 2048 |   1 |   0 |    | 181.7


--- BASELINE TEST OF UNRAID DEFAULT VALUES (1 Sample Point @ 10sec Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s
----------------------------------------------
  1 |  23 | 1280 |  384 | 128 |   192  | 182.1


 --- TEST PASS 1 (2 Min - 12 Sample Points @ 10sec Duration) ---
Tst | RAM | stri |  win | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
--------------------------------------------------------------------------------
  1 |  13 |  768 |  384 | 128 |   376  | 178.6 |   320  | 180.1 |   192  | 178.5
  2 |  27 | 1536 |  768 | 128 |   760  | 182.9 |   704  | 182.8 |   384  | 177.5
  3 |  55 | 3072 | 1536 | 128 |  1528  | 182.6 |  1472  | 183.7 |   768  | 182.7
  4 | 111 | 6144 | 3072 | 128 |  3064  | 184.3 |  3008  | 184.4 |  1536  | 183.5

 --- TEST PASS 1_LOW (2.5 Min - 15 Sample Points @ 10sec Duration) ---
Tst | RAM | stri |  win | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
--------------------------------------------------------------------------------
  1 |   2 |  128 |   64 | 128 |    56  | 110.5 |     0  | 112.0 |    32  | 109.2
  2 |   4 |  256 |  128 | 128 |   120  | 124.1 |    64  | 123.0 |    64  | 122.7
  3 |   6 |  384 |  192 | 128 |   184  | 168.4 |   128  | 165.8 |    96  | 167.4
  4 |   9 |  512 |  256 | 128 |   248  | 179.1 |   192  | 178.4 |   128  | 178.9
  5 |  11 |  640 |  320 | 128 |   312  | 176.2 |   256  | 172.6 |   160  | 178.4

 --- TEST PASS 1_HIGH (30 Sec - 3 Sample Points @ 10sec Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
--------------------------------------------------------------------------------
  1 | 222 |12288 | 6144 | 128 |  6136  | 184.7 |  6080  | 183.9 |  3072  | 184.7

 --- TEST PASS 1_VERYHIGH (30 Sec - 3 Sample Points @ 10sec Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
--------------------------------------------------------------------------------
  1 | 334 |18432 | 9216 | 128 |  9208  | 185.4 |  9152  | 185.1 |  4608  | 185.3

 --- END OF SHORT AUTO TEST FOR DETERMINING IF YOU SHOULD RUN THE REAL TEST ---

If the speeds changed with different values you should run a NORMAL/LONG test.
If speeds didn't change then adjusting Tunables likely won't help your system.

Completed: 0 Hrs 6 Min 23 Sec.
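For reference, values from a run like this can apparently be tried on the fly with mdcmd before committing them in Settings -> Disk Settings (a sketch only; the numbers are just the best-looking row from the short test above, not a recommendation):

/usr/local/sbin/mdcmd set md_num_stripes 6144
/usr/local/sbin/mdcmd set md_sync_window 3072
/usr/local/sbin/mdcmd set md_sync_thresh 3008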

 

Link to comment

OK, the transcode mount is "/config/transcode". As for size growth, I have not really tracked it, to be honest. However, I did get alerts in the past about the docker image size, so I raised it; I do tend to play with new apps and test them out, so I figured I needed to raise it to accommodate that habit and the higher-than-normal number of dockers I run.
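From some searching, one common approach (an assumption on my part, not something confirmed in this thread) seems to be giving Plex a dedicated transcode mapping so transcodes don't land in appdata or the docker image:

# Check where transcode temp files are landing while something is playing
# ("binhex-plex" is a placeholder container name)
docker exec binhex-plex ls -lh /config/transcode

# Alternative: map host /tmp to a container path such as /transcode (in the
# Unraid GUI: edit the container and "Add another Path"), then set Plex's
# "Transcoder temporary directory" to /transcode, i.e. the equivalent of:
#   docker run ... -v /tmp:/transcode ...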

Link to comment

P.S. Thanks for your little testing spreadsheet, which at least confirmed I am not the only one seeing huge performance drops accessing the cache over the network (10G in my case) between 6.7.2 and 6.8.
I did a gazillion tests and they all came back the same. There is something fishy in 6.8; still no clue what it is, but it definitely involves SMB and does not hit everyone. So in your case, also stick with 6.7.2.

 

An example of others reporting issues with SMB performance degradation:

 

Edited by glennv
Link to comment
