Guide: Enable TRIM on a QEMU disk in macOS/OS X



Tested on High Sierra and Mojave

 

I've been looking off and on for a way to enable TRIM support on a disk image in macOS. I had a few more minutes today and found it on the Internet (https://serverfault.com/questions/876467/how-to-add-virtual-storage-as-ssd-in-kvm).

 

 

Issue: QEMU disks are presented to macOS in a way that makes the OS interpret them as rotational disks, as shown under About This Mac > System Report > SATA/SATA Express.

 

[Screenshot: System Report showing the virtual disk as a rotational drive]

 

 

Even after forcing TRIM on all disks via the terminal, TRIM does not work, and doesn't even show up as an option. The result is that the OS slows over time and disk images bloat.

 

 

To correct this:

 

For Unraid 6.9.2 and below

 

(If you're worried about potential data loss, borking a working VM, or other world-ending scenarios, make a backup before doing this, and proceed at your own risk.)

 

 

With the VM shut down, edit the XML settings, changing the disk image entry from

 

<disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/disks/K/G/vdisk.img'/>
      <target dev='hda' bus='sata'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

 

to

 

<disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
      <source file='/mnt/disks/K/G/vdisk.img'/>
      <target dev='hda' bus='sata'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

 

with the changes happening only on the driver line. (Note: it may be possible to leave the cache on writeback and skip the io='native' setting, but I didn't experiment much; I just followed the working directions from the link.)

 

Make this change for every disk image the VM uses.

 

Next, scroll to the bottom of the XML and add the following to the QEMU arguments:

 

 

 <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.sata0-0-0.rotation_rate=1'/>
  </qemu:commandline>

 

Any other arguments you already have will still need to be included. I do not know if order matters, but mine is at the end of the arguments list.

 

 

If you have any other drives, add an additional copy of the argument (both lines) and modify the "device.sata0-0-0.rotat...." portion to match the address listed at the top with the disk image(s), as in the sketch below. If you only have one drive, you can leave it as is, assuming you didn't change the address.
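For example, with a second vdisk at SATA unit 1 (sata0-0-1 is an assumption here; use whatever matches your own <address> lines), the commandline block might look like this:

 <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.sata0-0-0.rotation_rate=1'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.sata0-0-1.rotation_rate=1'/>
  </qemu:commandline>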

 

If you did this correctly, the VM will boot normally, but this time it will display:

 

[Screenshot: System Report now showing Solid State Drive, but TRIM Support: No]

 

The disk is now recognized as an SSD, but without TRIM support. To fix this, you must force TRIM on all drives. Go to the Terminal and enter:

 

sudo trimforce enable

 

It will then give you some text that makes it seem like your computer will eat itself. 

 

[Screenshot: the warning text shown by trimforce before it proceeds]

 

 

The OS will then sit for a short while, after which it will reboot itself. After it restarts, verify that TRIM support is now enabled by going back to About This Mac > System Report > SATA/SATA Express.

 

[Screenshot: System Report showing TRIM Support: Yes]
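If you prefer the terminal over System Report, the same information can be read with system_profiler (the grep is just a convenience filter):

system_profiler SPSerialATADataType | grep -i "trim support"

It should now report "TRIM Support: Yes" for the virtual disk.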

 

 

 

 

For Unraid 6.10.0 RC1 and up

 

On 3/26/2022 at 4:57 AM, ghost82 said:

 

-------------------------------

 

replace this:

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
      <source file='/mnt/user/domains/macOS/macOS_disk.img'/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>

with this:

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
      <source file='/mnt/user/domains/macOS/macOS_disk.img'/>
      <target dev='hdc' bus='sata' rotation_rate='1'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>

 

Check if the disk is recognized as an SSD:

Boot macOS and go to About This Mac --> System Report... --> SATA

Select the controller and you will see "Support Type" and "TRIM Support":

Support Type should be Solid State Drive.

If TRIM Support is "No", open a terminal in macOS and type

sudo trimforce enable

 

 

 

 

Enjoy!


This thread initially seemed bizarre to me, but now I think I get it; I'd appreciate any feedback on my understanding:

 

On a real SSD, as I understand things, TRIM speeds up some writes and reduces wear on the flash chips by telling the SSD which blocks are no longer in use by the file system, so that the SSD controller can more efficiently deal with them.

 

So...for a virtual qcow2 disk image, enabling TRIM helps keep the dynamically-allocated disk image from growing more quickly than necessary?

 

And, assuming I'm right about that...are there any other advantages?

2 hours ago, bland328 said:

On a real SSD, as I understand things, TRIM speeds up some writes and reduces wear on the flash chips by telling the SSD which blocks are no longer in use by the file system, so that the SSD controller can more efficiently deal with them.

I'm no expert, but yes. It also "takes out the trash" when you delete a file: without TRIM the file isn't really deleted, so those blocks have to be erased the next time the "drive" writes to them, and performance degrades while the write waits. On top of that, the lack of garbage collection always seems to slow my disk image files after about 4-6 months of use. That makes sense, because if macOS thinks it's a rotational disk, the data remains after deletion but is just invisible to the OS, essentially filling up the image. And when the data actually hits the SSD, having to deal with all the existing occupied blocks causes excessive overhead for basic operations that involve writes.

 

2 hours ago, bland328 said:

So...for a virtual qcow2 disk image, enabling TRIM helps keep the dynamically-allocated disk image from growing more quickly than necessary?

Along with discard='unmap' in the XML, yes, it should; that releases the removed blocks back to the SSD, rather than keeping a statically mapped section that may be going unused in the VM.

 

2 hours ago, bland328 said:

And, assuming I'm right about that...are there any other advantages?

Better wear-leveling: it's not writing to the same section where the image resides over and over, which would cause premature failure of those blocks/sectors.

 

there may be others as well.

4 hours ago, testdasi said:

Does this only work with vdisk image? Would it work with a passed-through sata ssd (via device-id)?

 

It's been a while since I've passed an entire SSD to a macOS VM. It would just depend on how macOS sees the drive. If it reads it without the QEMU arg and the other bits, then you can check whether TRIM is enabled or not (which, for aftermarket drives, I think you have to force-enable anyway). If it's not recognized as an SSD, I don't know why this would not work, but again, I haven't tested it. I may have to do that just to see how it sees the disk.

 


This is really good; finally the host can sync with the size of the vdisk on the guest!

Since most of us are using sparse image files for vdisks, the vdisk can grow, and even if you delete files on the guest the host doesn't sync and keeps thinking that space is still in use; with the cache/io/discard modification the host can now sync the size of the vdisk.

Without this modification I once spent about a whole day zeroing the vdisk and copying it to a new image to get it back to its real size.
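For reference, that manual workaround is roughly this (a sketch: zero the free space inside the guest first, shut the VM down, then let qemu-img re-sparsify the image on the host; the path is just the example one from earlier in this guide):

qemu-img convert -O raw /mnt/disks/K/G/vdisk.img /mnt/disks/K/G/vdisk-compact.img
mv /mnt/disks/K/G/vdisk-compact.img /mnt/disks/K/G/vdisk.img

qemu-img convert skips runs of zeros, so the copy comes out sparse again; with discard='unmap' working you should never need to do this.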


I have just one vdisk, set up using SpaceInvaderOne's Macinabox guide. How do I get the alias of the disk? "<qemu:arg value='device.sata0-0-0.rotation_rate=1'/>" doesn't work for me, since I don't have an "alias name" entry anywhere in my XML. I've tried "sata0-0-3", "sata0-0-0", "drive", "disk" and none seem to work.

 

  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
      <source file='/mnt/user/domains/macOS/macOS_disk.img'/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>

 

14 hours ago, Linguafoeda said:

How do i get the alias of the disk?

Which version of Unraid?

For 6.9.2 and below:

Your alias is probably sata0-0-2; the Unraid GUI masks it, I don't know why. So add this before </domain>:

 <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.sata0-0-2.rotation_rate=1'/>
  </qemu:commandline>

 and make sure you have discard='unmap' in your disk block.

 

Check if the disk is recognized as an SSD:

Boot macOS and go to About This Mac --> System Report... --> SATA

Select the controller and you will see "Support Type" and "TRIM Support":

Support Type should be Solid State Drive.

If TRIM Support is "No", open a terminal in macOS and type

sudo trimforce enable

 

-------------------------------

If you have Unraid 6.10.0 RC1 and up,

replace this:

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
      <source file='/mnt/user/domains/macOS/macOS_disk.img'/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>

with this:

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
      <source file='/mnt/user/domains/macOS/macOS_disk.img'/>
      <target dev='hdc' bus='sata' rotation_rate='1'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>

 

Check if the disk is recognized as an SSD:

Boot macOS and go to About This Mac --> System Report... --> SATA

Select the controller and you will see "Support Type" and "TRIM Support":

Support Type should be Solid State Drive.

If TRIM Support is "No", open a terminal in macOS and type

sudo trimforce enable

 

In both cases, if you have issues, attach diagnostics.


Great, sata0-0-2 worked! I made the XML edit, enabled TRIM, and below is what System Report shows. My macOS vdisk.img is still being reported as 100GiB in the domains folder, though.

 

I ran the command log show --start $(date +%F) | grep -i spaceman_trim_free_blocks and nothing showed up. Essentially, I think TRIM is enabled but not running at boot?

 

[Screenshot: System Report showing the virtual disk as a Solid State Drive with TRIM Support: Yes]

 

my XML now:

  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
      <source file='/mnt/user/domains/macOS/macOS_disk.img' index='1'/>
      <backingStore/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <alias name='sata0-0-2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    ...
     <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.sata0-0-2.rotation_rate=1'/>
    <qemu:arg value='-usb'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='usb-kbd,bus=usb-bus.0'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='************************'/>
    <qemu:arg value='-smbios'/>
    <qemu:arg value='type=2'/>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='Penryn,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check'/>
  </qemu:commandline>
</domain>

 

6 hours ago, Linguafoeda said:

I see this screen on my VM when attempting to shut down - it's maxing out the CPU. Is this related to trying to run the trim command?

No, it has nothing to do with trim.

Try to boot into recovery: keep '0' pressed during VM boot if you don't see the bootloader GUI (OpenCanopy), and choose 'Recovery'. Once booted into recovery, open Disk Utility, select 'Show All Devices', select each container, and run a repair, as in the following picture:

[Screenshot: Disk Utility in recovery mode running a repair on an APFS container]

 

15 hours ago, Linguafoeda said:

My macOS vdisk.img is still being reported as 100GiB in the domains folder though

From the log I can't see anything unusual for trim.

kernel: (apfs) spaceman_scan_free_blocks:3154: disk1 scan took 3.423224 s, trims took 3.198372 s

The reported 100GB is correct, but there is a difference between the virtual disk size and the disk size (you have a sparse image).

To double check and see if trim works:

1. shutdown the vm

2. open unraid terminal and run:

qemu-img info /mnt/user/domains/macOS/macOS_disk.img

Virtual size should be about 100 GB

Write down or memorize the 'disk size'

3. run the vm

4. copy a 'large file' to the vm, such as a 4-5 GB file, and shut down the vm

5. run again from unraid terminal the same command:

Virtual size will still be the same value as before.

Disk size should be increased by 4-5 GB

6. run the vm and delete the 4-5 GB file you copied before, shutdown the vm

7. run again from unraid terminal the same command:

Virtual size will still be the same value as before.

Disk size should be decreased by 4-5 GB, same value as step 2.

 

TRIM is needed to sync the free space from the guest to the host; if TRIM doesn't work, the allocated space will keep increasing on the host side even if you free space on the guest side (see the host-side commands below).
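If you want to watch the same thing from the host side, the numbers can be read directly (paths assume the vdisk location used above):

ls -lh /mnt/user/domains/macOS/macOS_disk.img        # apparent (virtual) size, always ~100G
du -h /mnt/user/domains/macOS/macOS_disk.img         # blocks actually allocated on the host
qemu-img info /mnt/user/domains/macOS/macOS_disk.img # reports both, as 'virtual size' and 'disk size'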


These are my results from the above steps, but the space taken up on the SSD itself is still 100GiB. Does the disk size being ~30GB (vs. the fixed 100GB virtual size) never get reflected as usable space on the actual SSD? Maybe I misunderstood this part; I thought that was what we were actually solving, similar to how my Windows vdisk only shows up as ~36GiB used.

 

Step 1: Initial Setup
root@:~# qemu-img info /mnt/user/domains/macOS/macOS_disk.img
image: /mnt/user/domains/macOS/macOS_disk.img
file format: raw
virtual size: 100 GiB (107374182400 bytes)
disk size: 32.3 GiB

Step 2: Copied 3.4GiB movie file
root@:~# qemu-img info /mnt/user/domains/macOS/macOS_disk.img
image: /mnt/user/domains/macOS/macOS_disk.img
file format: raw
virtual size: 100 GiB (107374182400 bytes)
disk size: 35.8 GiB

Step 3: delete 3.4GiB movie file
root@:~# qemu-img info /mnt/user/domains/macOS/macOS_disk.img
image: /mnt/user/domains/macOS/macOS_disk.img
file format: raw
virtual size: 100 GiB (107374182400 bytes)
disk size: 32.2 GiB

 


 



I deleted my post above because I'm not sure if what I wrote was correct.

Anyway, apart from the GB values you are getting in the GUI (which I don't know where they come from), you need to consider the 'disk size' value reported by 'qemu-img info' as the real size occupied on the disk.

Look at the test I did:

- physical disk is a 6 TB hd

- I created a raw img (test1.img) with 'qemu-img create' and a size of 8000G (8 TB) --> this is the virtual size; I deliberately made it exceed the size of the physical disk, and formatted it with a GPT partition table and a Linux filesystem

- ls -la shows the 8 TB size (virtual size), but du test1.img shows the real size (40 KB), and qemu-img info reports the same (real) disk size

[Screenshots: ls -la, du, and qemu-img info output for test1.img]

 

Only 40 KB are really occupied, obviously not 8 TB.
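For anyone who wants to reproduce the test, the commands are roughly these (a sketch; test1.img is just a scratch file on the host):

qemu-img create -f raw test1.img 8000G   # 8 TB virtual size on a 6 TB physical disk
ls -lah test1.img                        # shows the full 8 TB apparent size
du -h test1.img                          # shows the few KB actually allocated
qemu-img info test1.img                  # 'virtual size' vs 'disk size'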

Only when you start to write to the vdisk is space allocated on the real disk; when you remove something from the vdisk, if trim/discard is enabled, the space is deallocated again (the host syncs with the guest). This doesn't happen if trim/discard is not enabled.

 

As long as trim/discard is enabled for the vdisk, the real occupied size on the physical HD is the one reported by du or by qemu-img info (disk size).

 

Can you run

qemu-img info /path/to/windowsvdisk.img

 

on the windows vdisk and report the output please?

My thought is that the windows vdisk has a virtual disk size of 36 GB..


Here are my Mac and Windows disks side by side (I just ran CleanMyMac to further clean the Mac partition down to 26.5GiB). The reason I think it's actually taking up the full 100GiB is that everywhere my cache drive (NVMe SSD) reports used vs. free space, it assumes the full 100GiB for the Mac vdisk. For example, my usr/mnt/cache has only 5 folders: appdata is ~68GiB, system is 1GiB, ISOs is 0GiB, Personal is ~0GiB, and that leaves the domains folder, which has just two folders and their files (Windows_disk.img at 36GiB + macOS_disk.img at 100GiB). 68 + 1 + 0 + 0 + 36 + 100 = ~205GiB, which is what Krusader and the Unraid Main tab show as "space used".

 

root@:~# qemu-img info /mnt/user/domains/macOS/macOS_disk.img
image: /mnt/user/domains/macOS/macOS_disk.img
file format: raw
virtual size: 100 GiB (107374182400 bytes)
disk size: 26.5 GiB

root@:~# qemu-img info /mnt/user/domains/Windows/Windows_disk.img
image: /mnt/user/domains/Windows/Windows_disk.img
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 36 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

 


 

root@:/mnt/user/domains# cd /mnt/user/domains
root@:/mnt/user/domains# ls -la
total 0
drwxrwxrwx 1 nobody users 24 Mar 23 00:10 ./
drwxrwxrwx 1 nobody users 20 Mar 27 04:30 ../
drwxrwxrwx 1 nobody users 32 Mar 23 00:10 Windows/
drwxrwxrwx 1 nobody users 28 Mar 22 23:05 macOS/
root@:/mnt/user/domains# cd macOS
root@:/mnt/user/domains/macOS# ls -la
total 27793620
drwxrwxrwx 1 nobody users           28 Mar 22 23:05 ./
drwxrwxrwx 1 nobody users           24 Mar 23 00:10 ../
-rw-rw-rw- 1 nobody users 107374182400 Mar 27 07:06 macOS_disk.img
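A quick way to see which of the two numbers a given tool is reporting (a sketch, assuming GNU du as shipped with Unraid) is to compare apparent size against allocated blocks:

du -sh --apparent-size /mnt/user/domains/macOS   # ~100G, the sparse file's apparent size
du -sh /mnt/user/domains/macOS                   # ~26G, what is actually allocated on the cache drive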

 

20 minutes ago, Linguafoeda said:

it is assuming the full 100GiB from the mac vdisk.

I don't think so; my test is pretty clear. You read a value, but we need to know what command it is derived from. In my opinion it's only a cosmetic issue.

The Windows VM uses a qcow2 image, not a raw img, even though the extension is .img; that's why the values are different.

If you want, you can convert the img to qcow2, but it will change almost nothing about the real free space, apart from fixing that cosmetic issue.
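The conversion itself would look something like this (a sketch, run with the VM shut down; the output filename is just an example, and the disk's <driver> line in the XML would then need type='qcow2' instead of type='raw'):

qemu-img convert -f raw -O qcow2 /mnt/user/domains/macOS/macOS_disk.img /mnt/user/domains/macOS/macOS_disk.qcow2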


Hmm, interesting. Well, what I was basing my assumption (that it's actually taking up 100GiB) on was how 1. Krusader (a Docker app), 2. mc (the blue terminal window above), and 3. the Unraid GUI were reporting used space.

 

Is there any risk in converting to qcow2 over raw? If there's no downside and only benefit, i.e. at least fixing the cosmetic issue, I'd like to do that if possible, if you could point me in the right direction.

