Terrible cache performance



8 hours ago, John_M said:

 

Without a lot more information, neither has anyone else. Post your diagnostics and the XML for your VM. What is the Task Manager screen grab meant to show - were you doing some sort of file copying at the time?

 

Sorry, I thought I posted the diagnostics. The screenshot is at idle; the only task I was running was searching for a graphics driver, and even when that task was complete it was the same story.

 

[Screenshot: Screen Shot 2018-08-26 at 11:40:54]

 

You can see here that the process with the highest disk usage is using little to nothing.

 

tower-diagnostics-20180826-0156.zip


One thing I notice from your disks' SMART data is a very high power cycle count on most of your disks, including your two Samsung cache SSDs:

  9 Power_On_Hours          -O--CK   099   099   000    -    3244
 12 Power_Cycle_Count       -O--CK   098   098   000    -    1205

That's a power cycle approximately every 2 hours and 41 minutes of power on time, on average. Do you have a power supply problem? As a comparison, here's the same information for one of my disks:
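As a quick sanity check on that figure, the average can be computed from the two raw SMART values quoted above (a one-liner using awk):

```shell
# 3244 power-on hours over 1205 power cycles: average hours per cycle
awk 'BEGIN { printf "%.2f hours per cycle\n", 3244 / 1205 }'
```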

  9 Power_On_Hours          -O--C-   100   100   000    -    14684
 10 Unknown_SSD_Attribute   PO--C-   100   100   050    -    0
 12 Power_Cycle_Count       -O--C-   100   100   000    -    24

I would do some read/write tests from the command line, directly to a folder on /mnt/cache, bypassing the virtualisation to check if the cache is the problem or if it's fundamentally sound.
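A minimal sketch of such a test with dd (the path and sizes are examples; conv=fdatasync makes dd flush to the device before it reports, so RAM write caching doesn't inflate the number):

```shell
# Write 256 MiB of zeros directly to the cache pool, flushing before timing ends.
dd if=/dev/zero of=/mnt/cache/tempfile.img bs=1M count=256 conv=fdatasync
# Read it back; note this may come from RAM unless caches are dropped first:
#   sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/cache/tempfile.img of=/dev/null bs=1M count=256
rm /mnt/cache/tempfile.img
```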

27 minutes ago, John_M said:

One thing I notice from your disks' SMART data is a very high power cycle count on most of your disks, including your two Samsung cache SSDs:

That's a power cycle approximately every 2 hours and 41 minutes of power on time, on average. Do you have a power supply problem? As a comparison, here's the same information for one of my disks:

I would do some read/write tests from the command line, directly to a folder on /mnt/cache, bypassing the virtualisation to check if the cache is the problem or if it's fundamentally sound.

 

I've not noticed any problems with my power supply (I doubt that I have one). I used the DiskSpeed plugin and got what, as far as I can tell, are normal results:

[Screenshot: Screen Shot 2018-08-26 at 13:38:40]

 

I don't really understand - what is a power cycle?


A power cycle is a power up followed by a power down, or vice versa. It's not necessarily a problem with your power supply itself but possibly the distribution (cabling, backplanes, drive cages): whatever sits between the PSU and the drives. Read performance looks fine. How about write? Are you trimming your SSDs?
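For reference, a TRIM pass can also be run by hand with fstrim (the mount point is an example; the Dynamix SSD TRIM plugin mentioned below schedules the same operation):

```shell
# Discard unused blocks on the filesystem mounted at /mnt/cache;
# -v prints how many bytes were trimmed. Requires root and TRIM support.
fstrim -v /mnt/cache
```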

 


I installed the Dynamix SSD TRIM plugin as soon as you mentioned it :) I tried again with 4 GB but it still doesn't look right.

[Screenshot: Screen Shot 2018-08-26 at 14:41:14]

 

I then used a real file

[Screenshot: Screen Shot 2018-08-26 at 14:44:29]

 

Keep in mind that the read speed of the disk the ISO is stored on (disk 1) is around that number; it's a cheap NAS drive.

[Screenshot: Screen Shot 2018-08-26 at 14:00:20]

 

As for the power cycles, it's probably because I don't keep my machine on at all times. I only use unRAID for 2 gamers 1 PC and do a fair few restarts due to software development work.


My results with a 1 GiB transfer (write, then read, then read again):

root@Lapulapu:~# dd if=/dev/zero of=/mnt/cache/tempfile.img bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.19082 s, 173 MB/s
root@Lapulapu:~# dd if=/mnt/cache/tempfile.img of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.41437 s, 314 MB/s
root@Lapulapu:~# dd if=/mnt/cache/tempfile.img of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.468517 s, 2.3 GB/s
root@Lapulapu:~#

I have my write cache settings vm.dirty_background_ratio and vm.dirty_ratio set very low (to 1% and 2%, respectively) so a 1 GiB transfer doesn't fit in RAM. Reads aren't affected, as the second one shows - it's clearly read from RAM. If your ratios are set to their default values you'll need to choose a bigger transfer to defeat the write cache. You can change the ratios with the Tips and Tweaks plugin - it helps with avoiding Out Of Memory errors too.
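Those two sysctls can also be inspected and changed from the shell (the values below mirror the post; changes need root and don't persist across reboots without the plugin):

```shell
# Show the current thresholds, as a percentage of RAM, at which background
# writeback starts and at which writers are forced to block.
sysctl vm.dirty_background_ratio vm.dirty_ratio
# Lower them so large writes hit the disk sooner (example values from above).
sysctl -w vm.dirty_background_ratio=1
sysctl -w vm.dirty_ratio=2
```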


That was on a server that's using slow mSATA SSDs. These are the results for a different server using 2.5-inch SSDs:

root@Northolt:~# dd if=/dev/zero of=/mnt/cache/tempfile.img bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.32141 s, 463 MB/s
root@Northolt:~# dd if=/mnt/cache/tempfile.img of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.91691 s, 560 MB/s
root@Northolt:~# dd if=/mnt/cache/tempfile.img of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.230467 s, 4.7 GB/s
root@Northolt:~#

 

39 minutes ago, jbonnett said:

I set them both to 1%, tried an 8 GB transfer and I still get silly speeds. Just in case you don't know from the diagnostics, I have 32 GB of RAM.

 

Copying from /dev/zero can sometimes give silly speeds when actually writing to the drive - a number of SSDs perform on-the-fly compression of data to reduce flash wear, and the compression ratio of the contents of /dev/zero is very, very high. So the write speed ends up being the link speed to the drive instead of the actual write speed of the drive.
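One way around that, as a sketch, is to benchmark with incompressible data: pre-generate a file from /dev/urandom, then time only the copy onto the cache (paths and sizes are examples):

```shell
# Stage incompressible data somewhere other than the cache being tested.
dd if=/dev/urandom of=/tmp/random.img bs=1M count=256
# Time the actual write to the cache; a compressing controller can't
# shrink random data, so this reflects real flash write speed.
dd if=/tmp/random.img of=/mnt/cache/tempfile.img bs=1M count=256 conv=fdatasync
rm /tmp/random.img /mnt/cache/tempfile.img
```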


OK, so I updated the Windows build and now my disk speed seems to be OK. I did a quick test by downloading a game from Steam; when Steam was allocating drive space, Task Manager reported 1 GB/s. That looks right for two cache SSDs, right?

