9p sharing speed not what I expected...


johnodon

Recommended Posts

Another way to try boosting your speeds is to apply the 'no copy on write' flag to the directory that holds your VM images on your BTRFS filesystem.

 

To do this, use the chattr +C command, adjusting the paths to suit your setup. On mine, I have a cache-only share called 'domains' with two directories under it: one for media .iso files (/media) and one for the actual VM disks (/images).

 

For the directories:

chattr +C /mnt/cache/domains /mnt/cache/domains/images /mnt/cache/domains/media
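After setting the flag, you can verify it with lsattr. A minimal sketch using a throwaway directory (on a real setup you'd point lsattr -d at /mnt/cache/domains and its subdirectories); the chattr call is guarded because the C attribute only applies on BTRFS:

```shell
# Hypothetical verification; on BTRFS a 'C' appears in lsattr's flag column.
dir=$(mktemp -d)
chattr +C "$dir" 2>/dev/null || echo "note: +C only applies on BTRFS"
lsattr -d "$dir" 2>/dev/null || true
```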

 

Whoops, it seems you have to do something more involved to fix it on existing files:

 

In accordance with the note above, you can use the following trick to disable CoW on existing files in a directory:

 

$ mv /path/to/dir /path/to/dir_old

$ mkdir /path/to/dir

$ chattr +C /path/to/dir

$ cp -a /path/to/dir_old/* /path/to/dir

$ rm -rf /path/to/dir_old

 

Make sure the data is not in use during this process. Also note that mv or cp --reflink, as described below, will not work (moved or reflinked files keep their original extents, so the new attribute never applies to them).
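The steps above can be wrapped into a small script. This is a minimal sketch, not a definitive implementation: the chattr call is allowed to fail gracefully on non-BTRFS filesystems, and the example run at the bottom uses a throwaway directory with a fake image file rather than a real VM path:

```shell
#!/bin/sh
# Sketch of the CoW-disable trick above, as a reusable function.

disable_cow() {
    dir=$1
    mv "$dir" "${dir}_old"                  # park the existing data
    mkdir "$dir"                            # recreate the directory
    chattr +C "$dir" 2>/dev/null \
        || echo "note: +C only works on BTRFS"
    cp -a "${dir}_old/." "$dir/"            # copies inherit No_COW
    rm -rf "${dir}_old"                     # drop the old copy
}

# Example run against a throwaway directory:
demo=$(mktemp -d)/images
mkdir -p "$demo"
echo "fake vm disk" > "$demo/disk1.img"
disable_cow "$demo"
cat "$demo/disk1.img"                       # data survives the round trip
```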

 

I did that the other day after jonp's post. The flag only takes effect on zero-byte files (when I create a qcow it's not zero bytes), and setting it on a directory only affects new files added to that directory. It won't change existing files, as you discovered. In other words, if you copy your qcow to another directory and then back again, the C flag gets set.

 

So it's best to set it on a directory first, then create or copy files into it (a move won't work either).

 

It still had no effect for me. I have 2 SSDs: one XFS and one still BTRFS. I only get 25 MB/s copying a 2GB file to the VM through 9p, 35-40 MB/s copying from the VM through 9p, and about 12 MB/s from SSD to SSD through 9p. Copying from SSD to SSD directly, I get about 280 MB/s to XFS and about 200 MB/s to BTRFS.
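When comparing numbers like these, it helps to measure with the same tool every time. A minimal sketch using dd; the target path and sizes here are placeholders (on a real test you'd write a multi-gigabyte file to a path under the 9p mount, not to /tmp):

```shell
# Hypothetical throughput check; TARGET would be a path on your 9p mount.
TARGET=/tmp/9p_speed_test.bin

# Write test: conv=fdatasync makes dd include the final flush in its
# timing, so cached writes don't inflate the number.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1

# Read test: read the file straight back.
dd if="$TARGET" of=/dev/null bs=1M 2>&1 | tail -n 1

rm -f "$TARGET"
```

The last line of dd's output reports the measured MB/s for each direction.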

 

I'm not really putting much into this since everything works fine.

 

Link to comment


This would be very unlikely to help with virtfs performance during bulky file transfers.

Link to comment


 

Perhaps, but at least it's something to try before going through the hassle of wiping out one's existing cache drive (app drive) to switch from BTRFS to XFS.

Link to comment


What does that have to do with 9p? It should work on multiple filesystem types.

Link to comment


 

I thought you were following the discussion, especially since you were the one who mentioned that having multiple layers of copy-on-write is likely not as efficient as it could be (the original post hinting at such a thing here).

 

The setting is applied on the cache drive (BTRFS) directory to turn off BTRFS's copy-on-write for the VM images. As you already pointed out, having one's VM images be GCOW / GCOW2 on top of a filesystem (BTRFS) that has its own copy-on-write functionality isn't nearly as efficient. Thus, what we're talking about is turning off the BTRFS copy-on-write function for the virtual machine image files; that is what chattr +C does for BTRFS.

 

The alternatives for avoiding multiple layers of copy-on-write functionality interfering with one another are:

- have your VM image be RAW, and thus have no image-level copy-on-write

- have your host filesystem be XFS, or another filesystem without copy-on-write functionality, if you're using COW/GCOW2 VM images

- have your host filesystem not use copy-on-write, by setting 'chattr +C' on the directory and recopying your VM images, if you're using COW/GCOW2 VM images
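For the first alternative, a raw image is just a plain file with no image-level metadata, usually created sparse. A minimal sketch, assuming qemu-img is the tool you'd normally use; truncate is shown because it produces an equivalent sparse file with coreutils alone, and the path and size are placeholder values:

```shell
# Sketch of creating a sparse raw disk image; path and size are placeholders.
IMG=$(mktemp -d)/vmdisk.raw

# With QEMU's tooling this would be: qemu-img create -f raw "$IMG" 1G
# truncate from coreutils produces the same sparse result:
truncate -s 1G "$IMG"

ls -lh "$IMG" | awk '{print $5}'   # apparent size: 1.0G
du -h "$IMG" | awk '{print $1}'    # blocks actually allocated: near zero
```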

 

The 'chattr +C' steps would be done on your dom-0 unRAID cache/app drive if the user is running BTRFS.

 

This is just another step to try to fine tune one's unRAID VM image setups, especially if the user is running BTRFS on their cache/app drive.

 

It's not so much specific to "9p" but more about fine-tuning and optimizing one's VM setups; after all, that is why one would use "9p" -- to optimize their setup.

 

I hope that clears up any possible confusion.

 

 

 

 

Link to comment

I thought you were following the discussion, especially since you were the one who mentioned that having multiple layers of copy-on-write is likely not as efficient as it could be (the original post hinting at such a thing here).

 

Be careful with the use of words such as "likely" when speaking on a subject like this. There are a number of scenarios where CoW will not affect write performance; e.g., writing new files isn't affected by CoW, only modification of existing ones is. In addition, I have not seen performance degradation in my usage scenarios yet, and I have been using VMs with CoW on a BTRFS RAID 1 filesystem with 3 disks for over 6 months now. There are definitely situations where CoW could impact performance, but bulk file transfers through VirtFS are unlikely to be one of them.

 

The setting is applied on the cache drive (BTRFS) directory to turn off BTRFS's copy-on-write for the VM images. As you already pointed out, having one's VM images be GCOW / GCOW2 on top of a filesystem (BTRFS) that has its own copy-on-write functionality isn't nearly as efficient.

 

Where is your evidence for this? I simply suggested this MAY be true in SOME situations, but it shouldn't be read as a universal truth. As many in this thread have already indicated, this didn't impact their performance tests for write speed.

 

Thus, what we're talking about is turning off the BTRFS copy-on-write function for the virtual machine image files; that is what chattr +C does for BTRFS.

 

The alternatives for avoiding multiple layers of copy-on-write functionality interfering with one another are:

- have your VM image be RAW, and thus have no image-level copy-on-write

- have your host filesystem be XFS, or another filesystem without copy-on-write functionality, if you're using COW/GCOW2 VM images

- have your host filesystem not use copy-on-write, by setting 'chattr +C' on the directory and recopying your VM images, if you're using COW/GCOW2 VM images

 

Yes, these are alternative approaches, but again, this thread is about 9p write performance, not VM performance.  These are two separate subjects for completely individual discussion.

 

The 'chattr +C' steps would be done on your dom-0 unRAID cache/app drive if the user is running BTRFS.

 

This is just another step to try to fine tune one's unRAID VM image setups, especially if the user is running BTRFS on their cache/app drive.

 

It's not so much specific to "9p" but more about fine-tuning and optimizing one's VM setups; after all, that is why one would use "9p" -- to optimize their setup.

 

I hope that clears up any possible confusion.

 

The confusion was only in your suggestion in relation to 9p performance. 9p usage in unRAID is implemented in a way to make it easy to gain filesystem access between host and guest, when the guest is a KVM instance. The performance of VirtFS may vary from system to system and from use-case to use-case. We are still early in its adoption within unRAID and folks are still testing it out. For our usage in OpenELEC, it is ideal because the performance should always be satisfactory from a media-playback perspective. If it isn't, we would suggest folks revert to using VirtIO methods (virtual network) and connect over SMB or NFS.

 

VirtFS removes a layer of complexity by eliminating the need for virtualized networking.  This is simplification at its finest.
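As a concrete illustration, a VirtFS share is a short <filesystem> stanza in the libvirt guest definition plus a 9p mount inside the guest. This is a minimal sketch; the source directory, mount tag, and guest mount point are placeholder values:

```xml
<!-- Hypothetical libvirt guest snippet; dir and tag are placeholders -->
<filesystem type='mount' accessmode='passthrough'>
  <source dir='/mnt/cache/media'/>
  <target dir='mediashare'/>  <!-- the mount tag the guest sees -->
</filesystem>
```

Inside the guest the share would then be mounted with something like `mount -t 9p -o trans=virtio mediashare /mnt/media` -- no virtual NIC, no SMB/NFS daemon in the middle.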

 

I don't mind the feedback on VM performance, but I would encourage further testing before we rule out CoW's usage. There are a number of articles online suggesting CoW performance is bad for VMs in BTRFS, and the KVM site itself says BTRFS is no good for KVM, but I don't know how outdated that is. Keep in mind, performance is a subjective term. Some measure performance by seek time, others by latency, others by read/write speed (MB/s), and then there are all the in-between scenarios like different write sizes (in blocks), etc. With something like a VM, a lot of developers are testing scenarios with usage in databases and web serving for cloud applications. These are systems that have highly concurrent read/write needs on very small file sizes. This is where dual CoW (btrfs / qcow) is at its worst (fyi, it's not GCOW, it's QCOW; but maybe that's a keyboard nationality thing?). That is not the typical usage of an unRAID system. It's in your home or office, where a select number of individuals have trusted access to the system. Gigantic concurrent-workload databases are not in the wheelhouse of unRAID testing right now.

 

Anyhow, just wanted to clarify that I'm following this thread, as I have an interest in the results of testing, but I don't want to see people get distracted down testing paths that I know have little chance of bearing fruit, when other data from "vanilla" usage could be more beneficial to gather.

Link to comment
