
How to maximise Windows VM to Array HDD throughput?


Crad


Heya,

 

I have a Windows VM I use for processing high-resolution images. I've been running some tests with a mixture of Cache (NVMe), Array (spinning disks), and vdisks, both stored on NVMe drives and on traditional HDDs, and I'm a little confused by the results:

 

What are these tests doing?

Without getting too into it, we're essentially taking 90 files of roughly 400 MB each and combining them into one 14 GB file, with a fair bit of CPU and GPU processing of various sorts in between, hence the large amount of RAM, the 16 GB GPU and the high-end CPU. Each time the process runs it uses around ~64 GB of RAM and ~12 GB of GPU VRAM regardless of where the files live, which indicates the primary driver of the time difference is definitely the read/write.
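As a rough sanity check on those timings (back-of-the-envelope, and assuming the reads and writes are mostly sequential): each frame moves about 90 × 400 MB ≈ 36 GB in and 14 GB out, call it 50 GB total. At the 4:30 per frame in the table below that works out to roughly 50 GB / 270 s ≈ 185 MB/s of combined throughput, while at 9:00 it falls to about 93 MB/s, which is right around what a single spinning disk manages sequentially.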

 

R/ = read from
W/ = write to

===== VM Specs: 128 GB RAM | 5950X (20 threads) | 6800 XT =====
1 - R/vdsk-Cache        -      W/vdsk-Cache       -   4:30 per frame
2 - R/vdsk-Array        -      W/vdsk-Array       -   9:00 per frame
3 - R/vdsk-Array        -      W/share-Cache      -   10:30 per frame
4 - R/share-Cache       -      W/share-Cache      -   8:45 per frame
5 - R/share-Array       -      W/share-Cache      -   14:00 per frame
6 - R/share-Array       -      W/vdsk-Cache       -   12:10 per frame

 

Explaining the above a little further:

The optimal hardware arrangement seems to be line 1: reading from, and writing to, a vdisk located on the Cache pool. This results in a time of 4 min 30 s per frame. However, because vdisks occupy their allocated space in full, this is not feasible as I would need a vdisk of approximately 12 TB on NVMe drives, which would be quite costly.

 

Line 2 is the same setup but with the vdisk stored on the Array (spinning HDDs). This takes twice as long to complete, which is not bad, but it still requires a vdisk to permanently fill a significant portion of one of the HDDs in the Array.

 

Line 3 is where things get confusing. By changing the output directory to a share located on the Cache pool, I would expect the output time to improve, since the Cache should be significantly faster even with the emulated network adapter, yet it actually comes in slower than line 2.

 

Lines 4-6 are further tests I ran; they either use non-optimal settings or produced slower results than the options above, so they're included just for thoroughness.

 

--------------

 

Does anyone have insight as to why line 3 is slower than line 2? What is it about writing to a share on the Cache pool that makes it so much slower than a vdisk on spinning disks?
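For anyone who wants to poke at this without running the whole pipeline, here's a minimal sequential-write probe (Python, run inside the VM) that should separate raw storage throughput from processing overhead. The drive letters are placeholders; point them at a vdisk-backed drive and the mapped Cache share:

```python
# Sequential-write throughput probe; run inside the Windows VM.
import os
import time

TARGETS = {
    "vdisk": r"D:\bench.tmp",  # placeholder: a vdisk-backed drive letter
    "share": r"Z:\bench.tmp",  # placeholder: the mapped Cache share
}
BLOCK = 1 << 20   # 1 MiB per write
TOTAL = 4 << 30   # 4 GiB per target

buf = os.urandom(BLOCK)

for name, path in TARGETS.items():
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:  # unbuffered binary writes
        for _ in range(TOTAL // BLOCK):
            f.write(buf)
        os.fsync(f.fileno())  # force data to stable storage before stopping the clock
    elapsed = time.perf_counter() - start
    os.remove(path)
    print(f"{name}: {TOTAL / elapsed / 1e6:.0f} MB/s sequential write")
```

If the share comes out far slower than the vdisk even in this stripped-down test, the bottleneck is the network/SMB path rather than anything in the image pipeline.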

 

 

Any insight is appreciated!

7 hours ago, Conrad Allan said:

However, because vdisks occupy their allocated space in full, this is not feasible as I would need a vdisk of approximately 12 TB on NVMe drives, which would be quite costly

Because of that, TRIM won't actually wind up doing anything, so write speed to a vdisk will steadily degrade.

 

You want to do the following to mitigate it (the comment and link at the top should also fix you up more easily):
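Roughly, and assuming a raw vdisk image on a filesystem that supports discard: set discard='unmap' on the vdisk's driver line in the VM's XML, so the guest's TRIM commands actually punch holes in the image. A sketch (the source path is just an example):

```xml
<disk type='file' device='disk'>
  <!-- discard='unmap' forwards guest TRIM so freed blocks are released on the host -->
  <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
  <!-- example path: substitute your own vdisk location -->
  <source file='/mnt/user/domains/Windows10/vdisk1.img'/>
  <target dev='hdc' bus='scsi'/>
</disk>
```

Note that TRIM only reaches the host if the bus forwards it: virtio-scsi does (add a virtio-scsi controller and use bus='scsi'), and newer QEMU also supports discard on virtio-blk.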

 

  • 1 month later...

Just wanted to provide an update here. I ended up being pulled onto another project for the last six weeks, so I've been unable to test the above suggestions; however, we fixed the issue by instead passing a dedicated drive through to the VM. This was the ultimate plan anyway, and we figured it was easiest to just go that route.
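In case it helps anyone who finds this later, passing a whole drive through in the VM XML looks roughly like this (the by-id path is a placeholder; use your drive's actual ID so it stays stable across reboots):

```xml
<disk type='block' device='disk'>
  <!-- raw block device handed directly to the guest -->
  <driver name='qemu' type='raw' cache='writeback'/>
  <!-- placeholder: substitute an entry from ls /dev/disk/by-id/ -->
  <source dev='/dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL'/>
  <target dev='hdd' bus='virtio'/>
</disk>
```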

 

Thanks for the help again!

