DoeBoye Posted March 26, 2015 I can't seem to find this article. Your Google-fu is better than mine (or you're more patient). Any chance someone could link to the specific article, please? This is the one I was originally referring to, specifically with the Samsung 840 EVO: http://www.xbitlabs.com/articles/storage/display/samsung-840-evo_6.html Thanks! I wonder what a longer wait time would yield. The review waits 30 minutes before retesting. If garbage collection had longer periods of time to operate, would it result in bigger performance gains?
dirtysanchez Posted March 26, 2015 This is the one I happened across, but effectively the same results. The drives run faster with TRIM enabled than just relying on garbage collection alone. http://www.tomshardware.com/reviews/crucial-m550-ssd-review,3772-10.html
interwebtech Posted March 26, 2015 w00t!

Mar 26 14:09:37 Tower in.telnetd[19350]: connect from 192.168.1.16 (192.168.1.16)
Mar 26 14:09:43 Tower login[19351]: ROOT LOGIN on '/dev/pts/0' from 'Dell-i7.home'
Mar 26 14:09:51 Tower logger: /mnt/cache: 9647398912 bytes were trimmed
kaiguy Posted March 26, 2015 I think I may have found some evidence of why I can't seem to get fstrim to work when attached to my M1015 in IT mode: http://comments.gmane.org/gmane.linux.scsi/88189 Essentially this is saying that LSI controllers require the "deterministic read after trim" capability on the SSD in order to enable TRIM support. hdparm -I shows that my 850 EVO is missing this feature, but apparently the PRO versions have it. I guess it's just my luck that I picked out a cache drive that doesn't work with TRIM on my hardware setup. Edit: Just to check, I updated the firmware on the mobo and M1015. fstrim absolutely doesn't work with this SSD on the HBA. Oh well.
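For anyone wanting to check their own drive the same way: the capability lines can be pulled out of hdparm -I output with a grep. The sample hdparm fragment below is illustrative, and /dev/sdX is a placeholder for your actual device; on a live system you would pipe `hdparm -I /dev/sdX` into the function instead.

```shell
# Filter hdparm -I output down to the TRIM-related capability lines.
# LSI controllers reportedly need "Deterministic read ... after TRIM".
check_trim() {
  grep -i -E 'TRIM supported|Deterministic read' || echo "no TRIM capability lines found"
}

# Demo against a captured hdparm -I fragment (illustrative text, not live output):
printf '%s\n' \
  '   *    Data Set Management TRIM supported (limit 8 blocks)' \
  '   *    Deterministic read ZEROs after TRIM' | check_trim
```

If the second line is absent from your drive's output, that would be consistent with the behavior described above.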
kaiguy Posted March 27, 2015 Well, yet another datapoint. I removed the SSD from the M1015 and did an RDM of the SSD from a mobo SATA port. fstrim still doesn't work. So now I'm thinking it's ESXi that might be getting in the way. No trim for me.
PCRx Posted March 28, 2015 Well, yet another datapoint. I removed the SSD from the M1015 and did an RDM of the SSD from a mobo SATA port. fstrim still doesn't work. So now I'm thinking it's ESXi that might be getting in the way. No trim for me. I can verify it does not work on my ESXi M1015 either.
impalerware Posted June 6, 2015 Does it make sense to add the fstrim command to the end of the mover script? That way everything is moved off the cache SSD first, and then the SSD gets trimmed.
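A rough sketch of that idea (the mover path /usr/local/sbin/mover and the /mnt/cache mount point are assumptions about a stock install, not official guidance):

```shell
# Hypothetical wrapper: run the mover first, then trim the freed space.
# Adjust MOVER and CACHE_MOUNT for your own setup.
MOVER=/usr/local/sbin/mover
CACHE_MOUNT=/mnt/cache

move_then_trim() {
  # only trim if the mover completed successfully
  "$MOVER" && fstrim -v "$CACHE_MOUNT" | logger
}
```

Calling move_then_trim from a nightly cron entry would then give the same effect as appending fstrim to the mover itself, without editing the stock script.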
garycase Posted June 6, 2015 Note that virtually all modern SSDs work just fine even without OS TRIM support => as long as they have sufficient "idle" time (i.e. aren't being otherwise used but are powered on) the garbage collection algorithms in the firmware will effectively "self-TRIM" the drives. Since most UnRAID systems have a large amount of "idle time" this shouldn't be an issue.
dirtysanchez Posted June 7, 2015 Since most UnRAID systems have a large amount of "idle time" this shouldn't be an issue. I can think of one instance where there may not be sufficient idle time, and that's Plex. Plex is chatty enough on a cache disk to keep it from ever spinning down (a platter-based drive, of course) with a 30 minute spin-down delay.
garycase Posted June 7, 2015 That may indeed be a problem -- you DO need some idle time for the garbage collection algorithms to work their "magic". In a case like that, OS TRIM support is much better than just depending on the drive's firmware.
spencers Posted June 25, 2015 Has this been implemented into 6.0 final?
Ned Posted July 14, 2015 Has this been implemented into 6.0 final? I'd like to know the answer to this question as well! Lots of people are currently using SSDs for their cache, so is it fair to assume that this trim stuff is a non-issue now with v6.0.1?
adoucette Posted July 18, 2015 Does the script in the OP perform trim on all SSD cache drives if there is a cache pool of multiple drives?
limetech Posted July 19, 2015 This is an item on our 'todo' list: add an fstrim scheduler item. Mounting with the 'discard' option is not a very good solution since it can cause operations such as adding/removing devices to a pool to take a very long time, as can 'mount' itself. Note that if you have SSDs in the parity-protected array and run 'fstrim' on the corresponding 'md' device, it may cause parity corruption and is not recommended until we can address this.
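For readers comparing the two approaches, this is roughly what each looks like; the device name, mount point, and schedule below are illustrative only, not unRAID defaults:

```shell
# Continuous TRIM via the mount option (discouraged above for pools):
#   mount -o discard /dev/sdX1 /mnt/cache
#
# Periodic TRIM via a scheduled fstrim instead, e.g. weekly at 04:00 Sunday:
#   0 4 * * 0 /sbin/fstrim -v /mnt/cache | logger
```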
tcharron Posted July 27, 2015 Maybe it goes without saying for others, but it wasn't obvious to me... TRIM won't work on your cache drive if it is formatted as reiserfs. I upgraded from v5 and have not yet, er, reformatted my cache drive. I was getting:

root@Tower:/mnt# /sbin/fstrim -v /mnt/cache/ | logger
fstrim: /mnt/cache/: FITRIM ioctl failed: Inappropriate ioctl for device
root@Tower:/mnt# df -T
Filesystem     Type        1K-blocks        Used   Available Use% Mounted on
tmpfs          tmpfs          131072       79440       51632  61% /var/log
/dev/sda       vfat         15137792     1843424    13294368  13% /boot
/dev/sdb1      ext2         59058092    29936144    26121952  54% /mnt/myapps
/dev/md2       xfs        3905110812  3337872796   567238016  86% /mnt/disk2
/dev/md3       xfs        3905110812  3841775564    63335248  99% /mnt/disk3
/dev/md4       reiserfs   2930177100  2674140732   256036368  92% /mnt/disk4
/dev/md5       reiserfs   2930177100  2725743564   204433536  94% /mnt/disk5
/dev/md6       xfs        3905110812  3780511556   124599256  97% /mnt/disk6
/dev/sdc1      reiserfs    117217208    17759172    99458036  16% /mnt/cache
shfs           fuse.shfs 20504522376 16368851696  4135670680  80% /mnt/user0
shfs           fuse.shfs 20621739584 16386610868  4235128716  80% /mnt/user
/dev/loop0     btrfs        10485760     1563720     6955000  19% /var/lib/docker
/dev/md1       xfs        2928835740     8807484  2920028256   1% /mnt/disk1
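One way to avoid that error is to check the filesystem type before trimming. A minimal sketch, assuming findmnt (from util-linux) is available; the mount point is whatever you pass in:

```shell
# Guard fstrim behind a filesystem-type check, since reiserfs does not
# support the FITRIM ioctl and fstrim will fail on it as shown above.
trim_if_supported() {
  mount_point=$1
  fstype=$(findmnt -n -o FSTYPE "$mount_point")
  case "$fstype" in
    btrfs|xfs|ext4) fstrim -v "$mount_point" | logger ;;
    *) echo "skipping $mount_point: $fstype does not support FITRIM" ;;
  esac
}
```

On the layout above, trim_if_supported /mnt/cache would skip the reiserfs cache instead of logging an ioctl failure.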
gundamguy Posted July 27, 2015 The method shown in the OP still works, but if you want to use the features of unRAID 6 a bit more and not risk messing up your go file, you can put the command in a .cron file and store it on your flash under the plugins directory. This way you can avoid touching the go file at all.
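A sketch of what such a .cron file might contain; the filename, path under the plugins directory, and schedule are examples, not a prescribed layout:

```shell
# /boot/config/plugins/ssdtrim.cron  (hypothetical filename/location)
# Trim the cache weekly, Sunday at 04:30:
30 4 * * 0 /sbin/fstrim -v /mnt/cache | logger
```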
dlandon Posted October 24, 2015 Author I have added the procedure to add the .cron file to have dynamix manage the SSD trim event. This is the preferred method for unRAID 6 because you do not have to modify the go file, and it gives more flexibility in the cron event timing.
bonienl Posted October 24, 2015 As an intermediate step I have created a new plugin "SSD TRIM", which allows you to set a schedule for the TRIM operation. The setup can be found under Settings -> Scheduler -> SSD TRIM Settings. See Dynamix plugins for installation. Thanks dlandon for all the research.
interwebtech Posted October 24, 2015 thanks dlandon & bonienl
trurl Posted October 24, 2015 As an intermediate step I have created a new plugin "SSD TRIM", which allows you to set a schedule for the TRIM operation. The setup can be found under Settings -> Scheduler -> SSD TRIM Settings. See Dynamix plugins for installation. Thanks dlandon for all the research. Does that plugin work with cache pools?
bonienl Posted October 24, 2015 I am using an SSD cache pool myself, and the trim operation works for me.
ptirmal Posted November 3, 2015 As an intermediate step I have created a new plugin "SSD TRIM", which allows you to set a schedule for the TRIM operation. The setup can be found under Settings -> Scheduler -> SSD TRIM Settings. See Dynamix plugins for installation. Thanks dlandon for all the research. What should the schedule be set to for the SSD TRIM operation?
bonienl Posted November 3, 2015 I've set it to once a week; that seems to be a good compromise for me.
MyKroFt Posted November 4, 2015 How about unassigned devices? It's trying to trim my cache, which is a regular hard drive. I have 2 SSDs for my Docker and VM stuff.
Gizmotoy Posted November 23, 2015 How about unassigned devices? It's trying to trim my cache, which is a regular hard drive. I have 2 SSDs for my Docker and VM stuff. Wondering the same myself. I run without a cache and use an SSD for my Docker/VMs/etc. It would be nice to be able to run trim on those as well.