Who is running a 2 in 1 server?


Dimtar

Recommended Posts

Quote

The only way to really test is the VM to the Cache.  Everything else will be limited by the write operations of parity.

So is 40~70 MB/s about the fastest I can expect short of using the cache pool? Parity checks/data rebuild are 2~3x as fast.

 

Quote

Your network will not matter; if this is all local from a VM to the array, it will use a virtual adapter and not go over the wire.

That's really cool!!

Link to comment
10 minutes ago, Joseph said:

So is 40~70 MB/s about the fastest I can expect short of using the cache pool? Parity checks/data rebuild are 2~3x as fast.

 

That's really cool!!

 

I don't think so; those speeds sound like you are pushing directly to the array rather than to the cache disk. When you do the file copy, do you see the Parity disk writes incrementing?

 

Did you mount the user0 version of the folder?

Is "use cache" enabled for the share you are using?

Are you using a disk share to test? (This would bypass the cache disk and result in direct writes to the protected array).
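
If it helps, a quick way to see where the writes actually land is to time a test write against each path (just a rough sketch; "Media" is a stand-in for whatever share you're testing with):

# pure cache-pool speed
dd if=/dev/zero of=/mnt/cache/ddtest1.bin bs=1M count=2048 conv=fdatasync
# through the user share -- should match the cache number if "use cache" is working
dd if=/dev/zero of=/mnt/user/Media/ddtest2.bin bs=1M count=2048 conv=fdatasync
# bypasses the cache and writes straight to the parity-protected array
dd if=/dev/zero of=/mnt/user0/Media/ddtest3.bin bs=1M count=2048 conv=fdatasync
rm /mnt/cache/ddtest1.bin /mnt/user/Media/ddtest2.bin /mnt/user0/Media/ddtest3.bin

If the /mnt/user number looks like the /mnt/user0 number instead of the /mnt/cache number, the share is writing straight to the array.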

 

 

Link to comment
On 1/18/2018 at 2:48 PM, SSD said:

If there is one thing you want on an SSD cache it is the VM image.

 

6 minutes ago, Tybio said:

 

I don't think so; those speeds sound like you are pushing directly to the array rather than to the cache disk. When you do the file copy, do you see the Parity disk writes incrementing?

 

Did you mount the user0 version of the folder?

Is "use cache" enabled for the share you are using?

Are you using a disk share to test? (This would bypass the cache disk and result in direct writes to the protected array).

 

 

I recently moved the VM to the cache pool, so the pool doesn't have as much free space as it used to. I have caching turned off for some shares, because when the pool gets close to full the VM pauses. It also seems like unRAID doesn't preemptively switch from the cache to the array: if a file being copied is larger than the space left on the cache, I get an out-of-space error. After some space is freed I can un-pause the VM, but Splashtop won't communicate with it until the VM is rebooted.
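
A rough workaround sketch for the out-of-space case (the share name "Media" and the source file here are placeholders): check the pool's free space first and fall back to the user0 path when the file won't fit, something like:

FILE=/mnt/user/Downloads/bigfile.mkv      # placeholder source file
NEED=$(stat -c%s "$FILE")                 # file size in bytes
FREE=$(df -B1 --output=avail /mnt/cache | awk 'NR==2 {print $1}')   # free bytes on the cache pool
if [ "$NEED" -lt "$FREE" ]; then
    cp "$FILE" /mnt/user/Media/     # fits on the pool: lands on the cache, mover migrates it later
else
    cp "$FILE" /mnt/user0/Media/    # too big: bypass the cache and write straight to the array
fi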

Link to comment

Ok, the limitation is the writes to the array, not the VM. What I did was get an SSD and mount it outside the array with the Unassigned Devices plugin; that left my cache free for use, and the VM can move files /much/ more quickly to the array.
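
Roughly, the setup looks like this (Unassigned Devices normally mounts under /mnt/disks; the "vm_ssd" label and the folder names are just examples):

# the vDisk lives on the SSD mounted outside the array, so the cache pool stays free for share writes
mkdir -p /mnt/disks/vm_ssd/domains/vmFolderName
mv /mnt/cache/domains/vmFolderName/vdisk1.img /mnt/disks/vm_ssd/domains/vmFolderName/
# then point the VM's primary vDisk location at the new path in the VM settings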

 

That doesn't explain why the reads from the array to the VM are so slow; someone with a bit more experience might need to chime in at this point. It feels like there is something simple going on; I'm just having trouble seeing it ATM. :)

Link to comment
On 1/18/2018 at 12:48 PM, SSD said:

If there is one thing you want on an SSD cache it is the VM image. 

 

I do the same thing. I use the default /mnt/cache/domains/vmFolderName/vdisk.img, install my OS, updates, drivers, and the third-party utilities I want as defaults, then shut it down and cp the image file to a backup folder on my array. If something happens to the VM, I have a clean image of the OS; the same goes if something happens to my cache drive. It also makes doing 2-in-1 systems easy: do the same thing, only WITHOUT the graphics card driver, and that becomes your template. Copy the template image to a new cache VM folder, assign a GPU, boot, and load the correct drivers for that GPU. For the second VM: different CPU pinnings, another copy of the template image in another cache VM folder (or a different file name, whatever), assign the next GPU, and so on.
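
In rough shell terms the flow is something like this (the "Backups" share and the VM folder names are just examples):

# 1) build the template VM once: OS, updates, utilities -- everything except the GPU driver
# 2) shut it down and keep a clean copy of the image on the array
mkdir -p /mnt/user/Backups/vm-templates
cp /mnt/cache/domains/vmFolderName/vdisk.img /mnt/user/Backups/vm-templates/template.img

# 3) each VM in the 2-in-1 box gets its own copy of the template on the cache
mkdir -p /mnt/cache/domains/GamingVM1 /mnt/cache/domains/GamingVM2
cp /mnt/user/Backups/vm-templates/template.img /mnt/cache/domains/GamingVM1/vdisk.img
cp /mnt/user/Backups/vm-templates/template.img /mnt/cache/domains/GamingVM2/vdisk.img
# then give each VM its own GPU and CPU pinning, boot it, and install that GPU's driver inside it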

Edited by Jcloud
Link to comment
1 hour ago, Tybio said:

Ok, the limitation is the writes to the array, not the VM. What I did was get an SSD and mount it outside the array with the Unassigned Devices plugin; that left my cache free for use, and the VM can move files /much/ more quickly to the array.

 

That doesn't explain why the reads from the array to the VM are so slow; someone with a bit more experience might need to chime in at this point. It feels like there is something simple going on; I'm just having trouble seeing it ATM. :)

Thanks man.... I think the interim solution is to suck it up and buy a bigger SSD cache pool. At some point I do need to figure out what's going on with the overall array I/O -- it doesn't make sense why it's so slow (again, just my opinion based on earlier posts) -- but I should probably move that discussion to another topic in the support community. At some point in the near future I'm going to want to build a backup unRAID; 40~70 MB/s box-to-box isn't going to be practical.
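
(For a rough sense of scale: at 50 MB/s, copying 1 TB box-to-box is about 1,000,000 MB / 50 MB/s = 20,000 seconds, or roughly 5.5 hours, so a multi-TB backup run quickly stretches toward days.)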

 

49 minutes ago, Jcloud said:

 

I do the same thing. I use the default /mnt/cache/domains/vmFolderName/vdisk.img....  If something happens to the VM, I have a clean image of the OS; the same goes if something happens to my cache drive.

Yeah, that's what I've done.... the backup image is placed in the share where the original used to reside.
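
If the vDisk ever gets trashed, restoring is just a copy back the other way (a minimal sketch; "vdisk-clean.img" and the paths are placeholders for however the backup is named and stored):

cp /mnt/user/domains/vmFolderName/vdisk-clean.img /mnt/cache/domains/vmFolderName/vdisk.img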

Link to comment