1812 Posted June 8, 2019 (edited)

Tested on High Sierra and Mojave.

I've been looking off and on for how to enable TRIM support on a disk image in macOS. I had a few more minutes today and found it on the internet: https://serverfault.com/questions/876467/how-to-add-virtual-storage-as-ssd-in-kvm

Issue: QEMU disks are presented to macOS in a manner that makes it interpret them as rotational disks, as shown under About This Mac > System Report > SATA/SATA Express. Even after forcing TRIM on all disks via Terminal, TRIM does not work, or even show up as an option. The result is that the OS slows over time and disk images bloat.

To correct, for 6.9.2 and below (if you're worried about potential loss of data, borking a working VM, or other world-ending scenarios, make a backup before doing this and proceed at your own risk):

With the VM shut down, edit the XML settings, changing the disk image info from

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/disks/K/G/vdisk.img'/>
  <target dev='hda' bus='sata'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

to

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
  <source file='/mnt/disks/K/G/vdisk.img'/>
  <target dev='hda' bus='sata'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

with the changes only happening on the second line. (Note: it may be possible to leave cache on writeback and not use the io='native' setting, but I didn't experiment much; I just followed the working directions at the link.)

Make this change for any disk images the VM uses. Next, scroll to the bottom of the XML and add the following to the QEMU arguments:

<qemu:commandline>
  <qemu:arg value='-set'/>
  <qemu:arg value='device.sata0-0-0.rotation_rate=1'/>
</qemu:commandline>

Any other arguments you have will also still need to be included.
I do not know if order matters, but mine is at the end of the arguments list. If you have any other drives, add an additional copy of the argument (both lines) and modify the "device.sata0-0-0.rotation_rate" part accordingly to match the address listed at the top with the disk image(s). If you only have one disk, you can leave it as is, assuming you didn't change the address.

If you did this correctly, the VM will boot normally, but this time the drive will display as recognized as an SSD, still with no TRIM support. To fix this, you must force TRIM on all drives. To do this, go to Terminal and enter:

sudo trimforce enable

It will then give you some text that makes it seem like your computer will eat itself. The OS will sit for a short bit, after which it will reboot itself. After it restarts, verify that TRIM support is now enabled by going back to About This Mac > System Report > SATA/SATA Express.

6.10.0 RC1 and up

On 3/26/2022 at 4:57 AM, ghost82 said:

Replace this:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
  <source file='/mnt/user/domains/macOS/macOS_disk.img'/>
  <target dev='hdc' bus='sata'/>
  <boot order='1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>

with this:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
  <source file='/mnt/user/domains/macOS/macOS_disk.img'/>
  <target dev='hdc' bus='sata' rotation_rate='1'/>
  <boot order='1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>

Check if the disk is recognized as an SSD: boot macOS, go to About This Mac > System Report... > SATA, select the controller, and you will see "Support type" and "TRIM Support". Support type should be "Solid State Drive". If TRIM Support is "No", open a Terminal in macOS and type:

sudo trimforce enable

Enjoy!

Edited March 27, 2022 by 1812: updated info for newer unraid OS version
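The device alias the -set argument targets isn't shown in the unraid GUI, but from the examples in this thread it appears to follow the pattern sata{controller}-{bus}-{unit}, taken from the disk's <address> line. Below is a rough sketch of deriving it on the host; the embedded <address> line is just an example from this thread, and the naming pattern itself is an observation from these posts, not documented libvirt behavior:

```shell
# Derive the QEMU device alias (sataC-B-U) from a libvirt <address> line.
# The sata{controller}-{bus}-{unit} pattern is an observation from this
# thread, not a documented API; verify against your own VM's XML.
xml="<address type='drive' controller='0' bus='0' target='0' unit='2'/>"

controller=$(printf '%s' "$xml" | sed "s/.*controller='\([0-9]*\)'.*/\1/")
bus=$(printf '%s' "$xml" | sed "s/.*bus='\([0-9]*\)'.*/\1/")
unit=$(printf '%s' "$xml" | sed "s/.*unit='\([0-9]*\)'.*/\1/")

arg="device.sata${controller}-${bus}-${unit}.rotation_rate=1"
echo "$arg"
```

For the example address above this prints device.sata0-0-2.rotation_rate=1, which matches the alias that ended up working later in this thread.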
Jagadguru Posted June 9, 2019

Thank you, that was great. It worked for me.
david279 Posted June 9, 2019

Ran this on my Mojave qcow2 vdisk and it works. qcow2 disks start small by default, but without being trimmed they grow as well, so this should keep the size down.
bland328 Posted June 12, 2019

This thread initially seemed bizarre to me, but now I think I get it; I'd appreciate any feedback on my understanding:

On a real SSD, as I understand things, TRIM speeds up some writes and reduces wear on the flash chips by telling the SSD which blocks are no longer in use by the file system, so that the SSD controller can deal with them more efficiently.

So, for a virtual qcow2 disk image, enabling TRIM helps keep the dynamically allocated disk image from growing more quickly than necessary?

And, assuming I'm right about that, are there any other advantages?
1812 Posted June 13, 2019 (Author, edited)

2 hours ago, bland328 said: "On a real SSD, as I understand things, TRIM speeds up some writes and reduces wear on the flash chips by telling the SSD which blocks are no longer in use by the file system, so that the SSD controller can more efficiently deal with them."

I'm no expert, but yes. It also "takes out the trash" when you delete a file: without TRIM the file isn't really deleted, and those blocks have to be cleared the next time the "drive" attempts to write to them, causing degraded performance while the write waits. Additionally, the lack of garbage collection always seems to slow my disk image files after about 4-6 months of use. Which makes sense: if macOS thinks it's a rotational disk, the data remains after deletion, just invisible to the OS, essentially filling up the image. And when the writes actually hit the SSD, having to deal with all the existing occupied blocks causes excessive overhead for basic operations that involve writes.

2 hours ago, bland328 said: "So...for a virtual qcow2 disk image, enabling TRIM helps keep the dynamically-allocated disk image from growing more quickly than necessary?"

Along with discard='unmap' in the XML, yes, it should. That setting releases the removed blocks back to the SSD, versus having a statically mapped section that may be going unused in the VM.

2 hours ago, bland328 said: "And, assuming I'm right about that...are there any other advantages?"

Better wear-leveling, meaning it's not writing to the same section over and over where the image resides, causing premature failure of those blocks/sectors. There may be others as well.

Edited June 13, 2019 by 1812
testdasi Posted June 13, 2019

Does this only work with a vdisk image? Would it work with a passed-through SATA SSD (via device ID)?
1812 Posted June 13, 2019 (Author)

4 hours ago, testdasi said: "Does this only work with vdisk image? Would it work with a passed-through sata ssd (via device-id)?"

It's been a while since I've passed an entire SSD to a macOS VM. It would just depend on how macOS sees the drive. If it reads it as an SSD without the qemu arg and other bits, then you can check whether TRIM is enabled or not (which I think you have to force-enable for aftermarket drives anyway). If it's not recognized as an SSD, I don't know why this wouldn't work, but again, I haven't tested it. I may have to do that just to see how it sees the disk.
bjornatic Posted February 25, 2020

I just did the procedure (successfully) on a Catalina VM with an APFS-formatted vdisk.img sitting on an SSD (NVMe). But is this command (sudo trimforce enable) still relevant in this situation? I'm reading confusing things about this, and because it's a VM, I'm even more clueless.
ghost82 Posted June 28, 2020 (edited)

This is really good; finally the host can sync with the size of the guest's vdisk! Since most of us use sparse image files for vdisks, the vdisk size can grow, and even if you delete files in the guest, the host doesn't sync and keeps treating the space as used. With the cache/io/discard modification, the host can now reclaim the freed space. Without this modification I once spent about a whole day zeroing the vdisk and deduplicating to a new disk to get it back to its real size.

Edited June 28, 2020 by ghost82
ghost82 Posted March 17, 2022 (edited)

This needs to be updated for unraid 6.10.0 RC1 and up, see here: https://forums.unraid.net/topic/51703-vm-faq/?do=findComment&comment=1034668

Edited March 17, 2022 by ghost82
Linguafoeda Posted March 25, 2022 (edited)

I have just one vdisk, set up using the macinabox SpaceInvaderOne guide. How do I get the alias of the disk? "<qemu:arg value='device.sata0-0-0.rotation_rate=1'/>" doesn't work for me, since I don't have an "alias name" element anywhere in my XML. I've tried "sata0-0-3", "sata0-0-0", "drive", "disk", and none seem to work.

<devices>
  <emulator>/usr/local/sbin/qemu</emulator>
  <disk type='file' device='disk'>
    <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
    <source file='/mnt/user/domains/macOS/macOS_disk.img'/>
    <target dev='hdc' bus='sata'/>
    <boot order='1'/>
    <address type='drive' controller='0' bus='0' target='0' unit='2'/>
  </disk>
  <controller type='usb' index='0' model='ich9-ehci1'>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>

Edited March 25, 2022 by Linguafoeda
ghost82 Posted March 26, 2022 (edited)

14 hours ago, Linguafoeda said: "How do i get the alias of the disk?"

Which version of unraid?

For 6.9.2 and below: your alias is probably sata0-0-2; the unraid GUI masks it, I don't know why. Add this before </domain>:

<qemu:commandline>
  <qemu:arg value='-set'/>
  <qemu:arg value='device.sata0-0-2.rotation_rate=1'/>
</qemu:commandline>

and make sure you have discard='unmap' in your disk block.

If you have unraid 6.10.0 RC1 and up, replace this:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
  <source file='/mnt/user/domains/macOS/macOS_disk.img'/>
  <target dev='hdc' bus='sata'/>
  <boot order='1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>

with this:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
  <source file='/mnt/user/domains/macOS/macOS_disk.img'/>
  <target dev='hdc' bus='sata' rotation_rate='1'/>
  <boot order='1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>

In both cases, check if the disk is recognized as an SSD: boot macOS, go to About This Mac > System Report... > SATA, select the controller, and you will see "Support type" and "TRIM Support". Support type should be "Solid State Drive". If TRIM Support is "No", open a Terminal in macOS and type:

sudo trimforce enable

If you have issues, attach diagnostics.

Edited March 26, 2022 by ghost82
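As a sketch of an alternative to clicking through System Report, the same TRIM status can be read from the macOS guest's Terminal with system_profiler. This assumes a SATA vdisk (NVMe devices report under a different data type) and is guarded so it does nothing useful outside macOS:

```shell
# Check TRIM status from the macOS guest's Terminal instead of the GUI.
# system_profiler is macOS-only; the guard makes this a no-op elsewhere.
# SATA vdisks appear under SPSerialATADataType; NVMe devices do not.
if command -v system_profiler >/dev/null 2>&1; then
    trim_info=$(system_profiler SPSerialATADataType | grep -i 'trim support')
else
    trim_info="system_profiler not available (not macOS)"
fi
echo "$trim_info"
```

If the output shows "TRIM Support: No", that is the cue to run sudo trimforce enable as described above.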
Linguafoeda Posted March 26, 2022 (edited)

Great, sata0-0-2 worked! I made the XML edit, enabled TRIM, and below is what System Report shows. My macOS vdisk.img is still being reported as 100GiB in the domains folder, though. I ran the command

log show --start $(date +%F) | grep -i spaceman_trim_free_blocks

and nothing showed up. Essentially, I think TRIM is enabled but not running at boot? My XML now:

<devices>
  <emulator>/usr/local/sbin/qemu</emulator>
  <disk type='file' device='disk'>
    <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
    <source file='/mnt/user/domains/macOS/macOS_disk.img' index='1'/>
    <backingStore/>
    <target dev='hdc' bus='sata'/>
    <boot order='1'/>
    <alias name='sata0-0-2'/>
    <address type='drive' controller='0' bus='0' target='0' unit='2'/>
  </disk>
...
<qemu:commandline>
  <qemu:arg value='-set'/>
  <qemu:arg value='device.sata0-0-2.rotation_rate=1'/>
  <qemu:arg value='-usb'/>
  <qemu:arg value='-device'/>
  <qemu:arg value='usb-kbd,bus=usb-bus.0'/>
  <qemu:arg value='-device'/>
  <qemu:arg value='************************'/>
  <qemu:arg value='-smbios'/>
  <qemu:arg value='type=2'/>
  <qemu:arg value='-cpu'/>
  <qemu:arg value='Penryn,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check'/>
</qemu:commandline>
</domain>

Edited March 27, 2022 by Linguafoeda
Linguafoeda Posted March 27, 2022 (edited)

I see this screen on my VM when attempting to shut down; it's maxing out the CPU. Is this related to trying to run the TRIM command? Here is the terminal log file for the spaceman command: https://gist.github.com/jeff15110168/4949afeb29f4d773b6c2874e0ca60547

Edited March 27, 2022 by Linguafoeda
ghost82 Posted March 27, 2022 (edited)

6 hours ago, Linguafoeda said: "I see this screen on my VM when attempting to shut down - it's maxing out the CPU. Is this related to trying to run the trim command?"

No, it has nothing to do with TRIM. Try to boot into recovery: keep '0' pressed on VM boot (if you don't see the bootloader GUI, OpenCanopy) and choose 'Recovery'. Once booted into recovery, choose Disk Utility, select 'Show All Devices', then select each container and run repair, such as in the following picture:

Edited March 27, 2022 by ghost82
ghost82 Posted March 27, 2022 (edited)

15 hours ago, Linguafoeda said: "My macOS vdisk.img is still being reported as 100GiB in the domains folder though"

From the log I can't see anything unusual for TRIM:

kernel: (apfs) spaceman_scan_free_blocks:3154: disk1 scan took 3.423224 s, trims took 3.198372 s

The reported 100GB is correct, but there is a virtual disk size and a disk size (you have a sparse image). To double check and see if TRIM works:

1. Shut down the VM.
2. Open an unraid terminal and run: qemu-img info /mnt/user/domains/macOS/macOS_disk.img — virtual size should be about 100 GB. Write down or memorize the 'disk size'.
3. Run the VM.
4. Copy a large file, say 4-5 GB, to the VM and shut the VM down.
5. Run the same command again from the unraid terminal: virtual size will be the same value as before; disk size should have increased by 4-5 GB.
6. Run the VM, delete the 4-5 GB file you copied before, and shut down the VM.
7. Run the same command once more: virtual size is unchanged; disk size should have decreased by 4-5 GB, back to the value from step 2.

TRIM is needed to sync the free space from the guest to the host: if TRIM doesn't work, the non-free space will continue to increase on the host side, even when you free space on the guest side.

Edited March 27, 2022 by ghost82
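The size check in steps 2, 5, and 7 above can be wrapped in a small host-side helper. The image path is the one used in this thread and is an assumption for other setups; the block is guarded so it degrades gracefully where qemu-img or the image isn't present:

```shell
# Report virtual vs. real size of a vdisk on the unraid host.
# IMG is the path from this thread; adjust it for your own setup.
IMG="/mnt/user/domains/macOS/macOS_disk.img"

if command -v qemu-img >/dev/null 2>&1 && [ -f "$IMG" ]; then
    # 'virtual size' is the fixed provisioned size; 'disk size' is the
    # space actually allocated on the host, which TRIM should shrink.
    out=$(qemu-img info "$IMG" | grep -E 'virtual size|disk size')
else
    out="qemu-img or $IMG not found; run this on the unraid host"
fi
echo "$out"
```

Running it before and after the copy/delete steps makes the disk-size change easy to eyeball without retyping the full command.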
Linguafoeda Posted March 27, 2022 Share Posted March 27, 2022 This is my results from the above steps but the space taken up on the SSD itself is still 100GiB. Does the disk size being ~30GB vs. 100GB virtual size (which is fixed) never get reflected in usable space to the actual SSD (maybe I misunderstood this part, I thought that what we were actually solving similar to how my Windows vdisk only shows up as ~36GiB used)? Step 1: Initial Setup root@:~# qemu-img info /mnt/user/domains/macOS/macOS_disk.img image: /mnt/user/domains/macOS/macOS_disk.img file format: raw virtual size: 100 GiB (107374182400 bytes) disk size: 32.3 GiB Step 2: Copied 3.4GiB movie file root@:~# qemu-img info /mnt/user/domains/macOS/macOS_disk.img image: /mnt/user/domains/macOS/macOS_disk.img file format: raw virtual size: 100 GiB (107374182400 bytes) disk size: 35.8 GiB Step 3: delete 3.4GiB movie file root@:~# qemu-img info /mnt/user/domains/macOS/macOS_disk.img image: /mnt/user/domains/macOS/macOS_disk.img file format: raw virtual size: 100 GiB (107374182400 bytes) disk size: 32.2 GiB Quote Link to comment
Linguafoeda Posted March 27, 2022 Share Posted March 27, 2022 Hmm okay. Do you know if there is a way to convert my disk to get this option back? Or otherwise get the ability to have a dynamically sized vdisk? Quote Link to comment
ghost82 Posted March 27, 2022 Share Posted March 27, 2022 (edited) I deleted my post above because I'm not sure if what I wrote was correct. Anyway, apart the GB values you are getting in the gui (which I don't know from where they come from), you need to consider the 'disk size' value reported by 'qemu-img info' as the real size occupied on the disk. Look at the test I did: - physical disk is a 6 TB hd - I created a raw img hd (test1.img) with 'qemu-img create' with a size of 8000G (8 TB) --> this is the virtual size, I did it on purpose to exceed the size of the physical disk, and formatted with gpt partition table, linux filesystem - ls -la shows the 8 TB size (virtual size), but du test1.img shows the real size (40 Kb), qemu-img info reports the same (real) disk size Only 40 Kb are really occupied, obviously not 8 TB. Only when you start to write to the vdisk space is allocated to the real disk; when you remove something from the vdisk if trim/discard is enabled then the space is unallocated again (host syncs with the guest); this doesn't happen if trim/discard is not enabled. As far as trim/discard is enabled for the vdisk, the real occupied size on the physical hd is that reported by du or qemu-img info (disk size) Can you run qemu-img info /path/to/windowsvdisk.img on the windows vdisk and report the output please? My thought is that the windows vdisk has a virtual disk size of 36 GB.. Edited March 27, 2022 by ghost82 Quote Link to comment
Linguafoeda Posted March 27, 2022 Share Posted March 27, 2022 (edited) Here's my Mac and Windows disk side by side (I just ran CleanMyMac to further clean up the mac partition down to 26.5GiB). The reason I think it's actually taking up the 100GiB space is that everywhere that my cache drive (NVMe SSD) reports used vs. free space, it is assuming the full 100GiB from the mac vdisk. For example, my usr/mnt/cache only has 5 folders, appdata is ~68GiB, system is 1GiB,, ISOs is 0GiB, Personal is ~0GiB, and that leaves my domains folder which has just two folders & files (Windows_disk.img is 36GiB + macOS_disk.img 100GiB). 68 + 1 + 0 + 0 + 36 + 100 = ~205GiB which is what Krusader and Unraid main tab show as "space used". root@:~# qemu-img info /mnt/user/domains/macOS/macOS_disk.img image: /mnt/user/domains/macOS/macOS_disk.img file format: raw virtual size: 100 GiB (107374182400 bytes) disk size: 26.5 GiB root@:~# qemu-img info /mnt/user/domains/Windows/Windows_disk.img image: /mnt/user/domains/Windows/Windows_disk.img file format: qcow2 virtual size: 100 GiB (107374182400 bytes) disk size: 36 GiB cluster_size: 65536 Format specific information: compat: 1.1 compression type: zlib lazy refcounts: false refcount bits: 16 corrupt: false root@:/mnt/user/domains# cd /mnt/user/domains root@:/mnt/user/domains# ls -la total 0 drwxrwxrwx 1 nobody users 24 Mar 23 00:10 ./ drwxrwxrwx 1 nobody users 20 Mar 27 04:30 ../ drwxrwxrwx 1 nobody users 32 Mar 23 00:10 Windows/ drwxrwxrwx 1 nobody users 28 Mar 22 23:05 macOS/ root@:/mnt/user/domains# cd macOS root@:/mnt/user/domains/macOS# ls -la total 27793620 drwxrwxrwx 1 nobody users 28 Mar 22 23:05 ./ drwxrwxrwx 1 nobody users 24 Mar 23 00:10 ../ -rw-rw-rw- 1 nobody users 107374182400 Mar 27 07:06 macOS_disk.img Edited March 27, 2022 by Linguafoeda Quote Link to comment
ghost82 Posted March 27, 2022 (edited)

20 minutes ago, Linguafoeda said: "it is assuming the full 100GiB from the mac vdisk."

I don't think so; my test is pretty clear. You read a value, but we need to know what command it is derived from. In my opinion it's only a cosmetic issue. The Windows VM uses a qcow2 image, not a raw img, even though the extension is .img; that's why the values are different. If you want, you can convert the img to qcow2, but it will change next to nothing about the real free space, apart from fixing that cosmetic issue.

Edited March 27, 2022 by ghost82
Linguafoeda Posted March 27, 2022 Share Posted March 27, 2022 (edited) Hmm interesting. Well the command I was basing my assumption on it actually taking up 100GiB was how 1. Krusader (docker app), 2. mc (blue terminal window above) and 3. Unraid GUI were reporting used space. Is there any risk to converting to qcow over raw? If there's no downside and only benefit i.e. fixing cosmetic issue at the least, I'd like to do that if possible if you could point me in the right direction Edited March 27, 2022 by Linguafoeda Quote Link to comment
ghost82 Posted March 27, 2022 Share Posted March 27, 2022 It's very simple to convert, just use: qemu-img convert -f raw -O qcow2 /path/to/vdisk.img /path/to/vdisk.qcow2 Then you need to change in the xml the disk type, from raw to qcow2 and point to the file vdisk.qcow2 Backup the vdisk.img in case something goes wrong. Quote Link to comment
Linguafoeda Posted March 27, 2022 Share Posted March 27, 2022 thank you - do i need to change the resulting file back to .img after? my Windows vdisk is "Windows_disk.img" but believe it's still qcow2 Quote Link to comment