markswift Posted October 30, 2018
Would love to get to the bottom of this - the fact is it worked perfectly before... I don't like the thought of my 3x1TB drives going TRIMless... Hope this can be solved.
debit lagos Posted November 2, 2018
On 10/29/2018 at 1:14 PM, limetech said:
There are no longer any 6.6.0-rc's since 6.6 has been released as 'stable'. re: util-linux - we avoid large package updates in stable point releases unless required for a security patch or other crucial bug fix. The next update of that package will be in 6.7. re: mpt3sas - generally anything other than 'bug fixes' is not backported from Linux RC's. We are also updating to the 4.19 kernel in Unraid 6.7. Not sure if the mpt3sas driver is updated there or not.
There seem to be a lot of SCSI changes in 4.19. Based on the latest discussion on the Linux SCSI development list, I believe several significant mpt3sas changes were made:
mpt3sas: Added new #define variable IOC_OPERATIONAL_WAIT_COUNT
mpt3sas: Separate out mpt3sas_wait_for_ioc
mpt3sas: Refactor mpt3sas_wait_for_ioc function
mpt3sas: Call sas_remove_host before removing the target devices
mpt3sas: Fix Sync cache command failure during driver unload
mpt3sas: Don't modify EEDPTagMode field setting on SAS3.5 HBA devices
mpt3sas: Fix driver modifying persistent data in Manufacturing page11
mpt3sas: Bump driver version to 27.100.00.00
> - Fix removed q->mq_ops non-NULL check in wbt_enable_default()
> - Remove spurious return in ide-io.c:ide_timer_expiry()
> - Dropped DM legacy path removal patch, now in mainline
> - Dropped ib_srp patch, now in mainline
> - Fixed a missing port unlock in IDE
> - Add SCSI ufs to the BSG conversions
> - Add patch to remove bsg-lib queue hook dependencies
> - Fixed missing clear of IO contexts
> - Added blk-mq backend for blk_lld_busy()
PLUS, it sounds like there are some significant performance improvements (https://www.spinics.net/lists/linux-scsi/msg125041.html) in 4.19 that trigger some happy dancing...
Concluding, I guess the last point one could make is that several utilities updated in the "rolling release" world may be applicable to resolving the HBA situation. I wait in great anticipation for Unraid 6.7. It may be the "Honey Hole" (love me some of my hometown heroes, American Pickers) of an update that we are all looking for. Thank you @limetech for your insight and update.
slimshizn Posted December 27, 2018
Also awaiting 6.7 and hoping that TRIM is enabled or fixed there. Watching this topic to see further changes.
debit lagos Posted December 31, 2018
@slimshizn I've been patiently trying not to post any ETA-like verbiage on here. But I'm pretty sure a good Unraid system re-baseline, with the newly released 4.20 Linux kernel, should provide the improvements/fixes we'd expect (fingers crossed).
therapist Posted December 31, 2018
4 minutes ago, debit lagos said:
@slimshizn I've been patiently trying not to post any ETA-like verbiage on here. But I'm pretty sure a good Unraid system re-baseline, with the newly released 4.20 Linux kernel, should provide the improvements/fixes we'd expect (fingers crossed).
What, if anything, have you seen in the changelogs that would lead you to believe the TRIM issue on HBAs is corrected in 4.20? Seeing that 6.7 may only have 4.19, it looks like we're still out of luck there.
debit lagos Posted December 31, 2018
Just now, therapist said:
What, if anything, have you seen in the changelogs that would lead you to believe the TRIM issue on HBAs is corrected in 4.20? Seeing that 6.7 may only have 4.19, it looks like we're still out of luck there.
The last time I checked, most of the SCSI changes were implemented in 4.19. I haven't done a full 4.19 vs 4.20 breakdown of the SCSI and FS areas/modules to see what additional changes were implemented. If 6.7 drops with 4.19, we "should" be good. If 6.7 comes with a Slackware re-baseline, even better, as there are several updated packages that would complement the improvements. The other aspect I have been getting familiar with is UNMAP. It is the SCSI counterpart of ATA TRIM: like the discards fstrim issues, it tells the device which blocks are no longer in use. Again, learning as time permits. Nevertheless, it seems the SCSI community has acknowledged that the consolidation of several modules and various library optimizations have affected several functionalities in the HBA world. I'm really hoping it is all put to bed in 4.19 or 4.20. Again, fingers crossed.
JorgeB Posted January 17, 2019
On 10/13/2018 at 11:45 PM, johnnie.black said:
There are reports that the issue remains even with the newer SAS3 models, like the 9300i, 9305, etc.
I can now confirm TRIM works with SAS3 models and current Unraid, at least it does on a 9300-8i, but like with all LSI HBAs it only works on SSDs with RZAT or DRAT, e.g.:
OK:
hdparm -I /dev/sdc | grep TRIM
    * Data Set Management TRIM supported (limit 8 blocks)
    * Deterministic read ZEROs after TRIM
Not OK:
hdparm -I /dev/sdb | grep TRIM
    * Data Set Management TRIM supported (limit 8 blocks)
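The two hdparm outputs above can be turned into a quick check. Below is a minimal sketch (the helper name lsi_trim_ok is made up; the two capability strings are the ones hdparm -I prints, and later posts in this thread pin "Deterministic read ZEROs after TRIM" down as the line LSI HBAs require):

```shell
# Hypothetical helper: classify "hdparm -I" output for LSI HBA TRIM use.
# Per the post above, LSI HBAs only trim SSDs that advertise both TRIM
# support and deterministic read ZEROs after TRIM (RZAT).
lsi_trim_ok() {
    out=$(cat)  # hdparm -I output on stdin
    if echo "$out" | grep -q "Data Set Management TRIM supported" &&
       echo "$out" | grep -q "Deterministic read ZEROs after TRIM"; then
        echo "ok"
    else
        echo "not ok"
    fi
}

# Typical use on a live system (device name is an example):
#   hdparm -I /dev/sdc | lsi_trim_ok
```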
debit lagos Posted January 18, 2019
On 1/17/2019 at 8:39 AM, johnnie.black said:
I can now confirm TRIM works with SAS3 models and current Unraid, at least it does on a 9300-8i, but like with all LSI HBAs it only works on SSDs with RZAT or DRAT.
Sir, greatly appreciate the update. I suspect you are testing a non-released beta version? Would you be able to confirm which kernel version you are now on? Again @johnnie.black, thank you very much for the update.
slimshizn Posted January 18, 2019
Oh wow, well this is a bit of a game changer. I'll be able to add more SSDs for RAID10 and UD. Edit: never mind, I have the SAS2008-based LSI 9211-8i. Eh well, maybe later on something will change.
JorgeB Posted January 18, 2019
2 hours ago, debit lagos said:
I suspect you are testing a non-released beta version? Would you be able to confirm which kernel version you are now on?
No, this was tested with v6.6.6
therapist Posted January 20, 2019
On 1/17/2019 at 8:39 AM, johnnie.black said:
I can now confirm TRIM works with SAS3 models and current Unraid, at least it does on a 9300-8i, but like with all LSI HBAs it only works on SSDs with RZAT or DRAT, e.g.:
Is this through a backplane or direct to the HBA? I am extremely curious as to why the LSI devs would no longer support SAS2 TRIM functionality.
JorgeB Posted January 20, 2019
7 hours ago, therapist said:
Is this through a backplane or direct to the HBA?
Direct
JorgeB Posted February 11, 2019
1 hour ago, JoeUnraidUser said:
I hope this TRIM problem is fixed in 6.7.0, especially since this hardware is recommended as compatible in the Wiki.
It's not an Unraid problem, it's the Linux LSI driver; current status is the same as my post above from January 17th.
limetech Posted February 12, 2019
1 hour ago, JoeUnraidUser said:
Mostly I was pissed off that they recommended hardware that they knew didn't work.
Who's "they"?
1 hour ago, johnnie.black said:
It's not an Unraid problem, it's the Linux LSI driver; current status is the same as my post above from January 17th.
What is the root problem? Does the controller not transparently pass through the attached device?
therapist Posted February 12, 2019
4 hours ago, JoeUnraidUser said:
Is there any progress with this problem? I had 2 Marvell SATA controllers which were not compatible with 6.7.0. About 2 weeks ago I replaced them with 2 LSI SAS 9211-8i HBAs. You may ask why I would specifically buy those cards. Because the Hardware Compatibility page of the Wiki, under PCI SATA Controllers, recommends the LSI SAS 9211-8i. I hope this TRIM problem is fixed in 6.7.0, especially since this hardware is recommended as compatible in the Wiki. edit: Toned down the language.
I don't think the 9211 cards were compatible with SSD TRIM at all... the 9207 was typically sought out as the card for SSD operation. It has more to do with LSI/Broadcom implementations than Unraid. Just because @limetech has a "compatibility matrix" doesn't mean every combination of hardware is guaranteed to work. The end user has to do their due diligence. In my first post, I thought I was being pretty thorough... even buying additional hardware to try and make it work. It seems somewhere along the line the mpt2sas driver dropped TRIM support for the most commonly used drives? I am in the process of moving most of my drives to a direct-attach backplane and will be experimenting more when done.
JorgeB Posted February 12, 2019
What is the root problem? Does the controller not transparently pass through the attached device?
Up to Unraid 6.3.5 (mpt3sas driver 13.100.00.00) TRIM works on both LSI SAS2 and SAS3 HBAs. Starting with Unraid 6.4.1 (mpt3sas driver 15.100.00.00) TRIM stopped working on SAS2 HBAs like the 9211-8i, 9207-8i, etc., but still works on SAS3 HBAs like the 9300-8i. Note that in all cases TRIM with LSI HBAs only works on SSDs with deterministic read zeros after TRIM; for SSDs without deterministic reads after TRIM you get the standard TRIM-unsupported error when running fstrim:
the discard operation is not supported
When running fstrim with a SAS2 LSI HBA on an SSD with deterministic read TRIM support and the latest driver, you get a different, more cryptic error:
FITRIM ioctl failed: Remote I/O error
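The regression boundary described above (working at driver 13.100.00.00, broken from 15.100.00.00 on) can be checked programmatically. A sketch, assuming 15.100.00.00 is the first affected version as stated in that post; the function name is made up:

```shell
# Hypothetical helper: report whether a given mpt3sas driver version falls
# in the range this thread identifies as having broken SAS2 TRIM.
mpt3sas_trim_regressed() {
    ver="$1"
    threshold="15.100.00.00"   # first version reported to lose SAS2 TRIM
    # sort -V orders version strings; if the threshold sorts first (or equal),
    # the given version is at or past the regression point.
    oldest=$(printf '%s\n' "$ver" "$threshold" | sort -V | head -n1)
    if [ "$oldest" = "$threshold" ]; then
        echo "regressed"
    else
        echo "ok"
    fi
}

# On a live system the installed version could be read with (path may vary):
#   modinfo mpt3sas | awk '/^version:/{print $2}'
```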
JorgeB Posted February 12, 2019
One more thing, since this thread is also about Samsung SSDs: all Samsung consumer SSD models prior to the 860 EVO don't support deterministic reads after TRIM, so if for example you have an 850 EVO it will never be trimmed by an LSI HBA. I believe the PRO models are different, and most support it; you can easily check with hdparm:
OK for LSI HBA:
hdparm -I /dev/sdc | grep TRIM
    * Data Set Management TRIM supported (limit 8 blocks)
    * Deterministic read ZEROs after TRIM
Not OK for LSI HBA:
hdparm -I /dev/sdb | grep TRIM
    * Data Set Management TRIM supported (limit 8 blocks)
limetech Posted February 12, 2019
8 hours ago, johnnie.black said:
One more thing, since this thread is also about Samsung SSDs: all Samsung consumer SSD models prior to the 860 EVO don't support deterministic reads after TRIM, so if for example you have an 850 EVO it will never be trimmed by an LSI HBA. I believe the PRO models are different, and most support it; you can easily check with hdparm.
Ok, that's good info, but here's what I was wondering: when you attach devices to this controller, are you talking about a situation where the controller is organizing those devices into some kind of RAID and then passing the "volume" to Linux? OR, is it operating in "jbod" mode where devices are simply passed through? If the latter, it should not matter what flavor of TRIM the device supports. If the former, then sure, a RAID controller operating in raid-5/6 is going to want either RZAT or DRAT - this is the reason we don't support TRIM in the Unraid parity-protected array yet (because I haven't added the necessary code in the driver to support RZAT or DRAT and detect which is present).
JorgeB Posted February 12, 2019
1 minute ago, limetech said:
Are you talking about a situation where the controller is organizing those devices into some kind of RAID and then passing the "volume" to Linux?
No, LSI TRIM only works for HBAs in IT mode, no RAID, but it still requires deterministic reads after TRIM, don't know why.
therapist Posted February 12, 2019
15 minutes ago, johnnie.black said:
No, LSI TRIM only works for HBAs in IT mode, no RAID, but it still requires deterministic reads after TRIM, don't know why.
I don't think this is necessarily the case, as the LSI/Broadcom compatibility matrix lists (for the sake of the topic) Samsung EVO drives as compatible with 9207 chipsets. As evidenced by my earlier tests, even an 840 PRO with DRAT is not working properly in later builds, and I would imagine there are plenty of users on earlier builds that had functioning TRIM with their EVO SSDs on HBAs.
EDIT: https://docs.broadcom.com/docs-and-downloads/host-bus-adapters/IT_SAS_Gen_2.5CompatibilityList.pdf
therapist Posted February 12, 2019
22 minutes ago, limetech said:
...OR, is it operating in "jbod" mode where devices are simply passed through? If the latter, it should not matter what flavor of TRIM the device supports.
I was under the impression that HBAs in IT mode worked exactly like this, but somewhere along the line the mpt2sas driver has reduced compatibility. There is a whole plugin dedicated to TRIM automation; I wonder how many users are out there who just don't realize it isn't working anymore. Not that it is the end of the world, but relying solely on an SSD's internal garbage collection is not the best course.
JorgeB Posted February 12, 2019
14 minutes ago, therapist said:
I would imagine there are plenty of users on earlier builds that had functioning TRIM with their EVO SSDs on HBAs.
I tested with the old driver included with v6.3.5 and no LSI HBA trims a non-RZAT SSD; I tested with the 9211-8i, 9207-8i and 9300-8i.
JorgeB Posted February 12, 2019
Also check this: https://www.broadcom.com/support/knowledgebase/1211161496937/trim-and-sgunmap-support-for-lsi-hbas-and-raid-controllers
Quote:
LSI 3ware and MegaRAID controllers set up in RAID do not support TRIM.
LSI SAS HBAs with IR firmware do not support TRIM.
LSI SAS HBAs with IT firmware do support TRIM, but with these limitations: the drives must support both "Data Set Management TRIM supported (limit 8 blocks)" and "Deterministic read ZEROs after TRIM" in their ATA options. The Samsung 850 PROs don't have "Deterministic read ZEROs after TRIM" support, and thus TRIM cannot be run on these drives when attached to LSI SAS HBAs with IT firmware.
therapist Posted February 12, 2019
15 minutes ago, johnnie.black said:
Also check this: https://www.broadcom.com/support/knowledgebase/1211161496937/trim-and-sgunmap-support-for-lsi-hbas-and-raid-controllers
Well, there you go. Next question is: can sg_unmap be used in place of TRIM on HBAs? It does not look like unRAID has the sg_unmap command baked in.
JorgeB Posted February 12, 2019
5 minutes ago, therapist said:
Next question is: can sg_unmap be used in place of TRIM on HBAs?
I did try it, and IIRC it worked even on SAS2 HBAs, but I didn't see a way to use it like fstrim, i.e. you need to specify which blocks you're unmapping. I used it on the whole device, and it resulted in all sectors being wiped, same as if using blkdiscard.
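For anyone tempted to reproduce this, a heavily guarded sketch of the whole-device unmap described above. sg_unmap (from the sg3_utils package) takes explicit --lba/--num ranges rather than walking the filesystem's free space the way fstrim does, so unmapping the whole device wipes every sector, exactly like blkdiscard. The function name and the confirmation flag are inventions for this sketch:

```shell
# DESTRUCTIVE sketch - discards the entire device, like blkdiscard.
unmap_whole_device() {
    dev="$1"; confirm="$2"
    # Refuse to run without an explicit confirmation flag (made-up flag name).
    if [ "$confirm" != "--yes-wipe-everything" ]; then
        echo "refusing: pass --yes-wipe-everything to discard all of $dev" >&2
        return 1
    fi
    # sg_unmap needs explicit LBA ranges; cover the device from LBA 0.
    # (Very large devices may need the range split into chunks.)
    sectors=$(blockdev --getsz "$dev")   # size in 512-byte sectors
    sg_unmap --lba=0 --num="$sectors" "$dev"
}
```

This matches the observation above: with no filesystem awareness, the only obvious whole-device use of sg_unmap is a full wipe.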