Samsung SSD HBA TRIM


therapist


Quote

In this example there is a SATA SSD at /dev/sdc. To check the capacity of the SATA SSD:

 

sg_readcap /dev/sdc

 

Read Capacity results:

    Last logical block address=117231407 (0x6fccf2f), Number of blocks=117231408

    Logical block length=512 bytes

Hence:

  Device size: 60022480896 bytes, 57241.9 MiB, 60.02 GB 

 

Then run the sg_unmap command:

 

sg_unmap --lba=0 --num=117231407 /dev/sdc

or

sg_unmap --lba=0 --num=117231408 /dev/sdc

 

Seems pretty straightforward to make a BASH script to extract last LBA address and run sg_unmap
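
Something like this minimal sketch, assuming sg3_utils is installed (a hypothetical script, and note, per the reply below, that unmapping the whole device range discards everything on it, not just free space):

#!/bin/bash
# Hypothetical sketch: unmap an ENTIRE device based on its reported capacity.
# WARNING: this discards every block on the device, not just unused space.
DEV="${1:?usage: $0 /dev/sdX}"

# sg_readcap prints a line such as:
#   Last logical block address=117231407 (0x6fccf2f), Number of blocks=117231408
LAST_LBA=$(sg_readcap "$DEV" | grep -oE 'Last logical block address=[0-9]+' | cut -d= -f2)
NUM_BLOCKS=$((LAST_LBA + 1))

echo "Unmapping $NUM_BLOCKS blocks on $DEV"
sg_unmap --lba=0 --num="$NUM_BLOCKS" "$DEV"

Some devices cap how many blocks a single UNMAP may cover (the Block Limits VPD page reports a maximum unmap LBA count), so a whole-device range might need to be split into chunks.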

When I run the command in the 6.6.6 console I get "command not found"... is it something that perhaps comes with Nerd Pack?

5 minutes ago, therapist said:

Seems pretty straightforward to make a BASH script to extract last LBA address and run sg_unmap

Sorry, I wasn't very clear: if you run sg_unmap on the whole device it will wipe it completely, like running blkdiscard, not like fstrim, which only unmaps the sectors marked as empty. At least that was what happened when I did it; feel free to try, but do it on an empty SSD.
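
For reference, the difference in a nutshell (device and mount point names here are just examples):

# Discards EVERY block on the device, regardless of what the filesystem holds
# (destructive - roughly what a whole-device sg_unmap does):
blkdiscard /dev/sdX

# Discards only the blocks the mounted filesystem reports as free:
fstrim -v /mnt/cache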

 

To install it, download the package and put it inside a folder called "extra" on the flash drive; Unraid will then install it at every boot.
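
For example, something like this from the console, assuming the package has already been downloaded (the package filename below is only illustrative):

# The flash drive is mounted at /boot; anything in /boot/extra gets
# installed with installpkg at every boot.
mkdir -p /boot/extra
cp sg3_utils-1.44-x86_64-1.txz /boot/extra/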

 

 


Clearly this was a business decision not a technical decision.  That is, suppose they allow any form of TRIM for jbod/raid0/raid1 and customers go ahead and define some of these volumes.  Then the customer decides to convert their volumes to raid5/6 - well, now there is a problem if they have the wrong kind of devices - that is, one raid level "worked", now a different raid level "doesn't work".  This results in unhappy customers, who should have known better, but blame the company anyway.  Now the company has a huge number of complaints on their forums, complaints in email, and other nonsense that costs the company a lot of time and money.  So someone "at the top" at LSI Broadcom said, "Hey man, we don't want to deal with this, just don't support the wrong kind of trim AT ALL".  I totally understand the business sentiment, if not the technical sentiment.

1 minute ago, limetech said:

Clearly this was a business decision not a technical decision.  That is, suppose they allow any form of TRIM for jbod/raid0/raid1 and customers go ahead and define some of these volumes. 

Agree. For me the only strange thing about this is TRIM no longer working with LSI SAS2 HBAs, now for over a year or two, even when using the required SSDs. It appears to be a Linux driver issue, but I did a lot of googling and can't find any info about it or other people complaining.


It didn't make sense if it was, but just to confirm this isn't an Unraid problem, I installed the latest Fedora Workstation, which comes with mpt3sas v25.100.00.00, the same as Unraid v6.6, and I get exactly the same cryptic error when trying to trim a supported SSD with a SAS2 LSI:

 

FITRIM ioctl failed: Remote I/O error
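
For anyone wanting to compare driver versions on their own box, the loaded mpt3sas version can be checked with either of these:

# Version of the currently loaded module
cat /sys/module/mpt3sas/version

# Or read it from the module file itself
modinfo mpt3sas | grep -i ^version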


Not sure if this helps matters, but both my SSDs correctly report TRIM support:

 

 hdparm -I /dev/sdg | grep TRIM

  * Data Set Management TRIM supported (limit 8 blocks)

  * Deterministic read ZEROs after TRIM

 

These are 2x 860 EVO drives in a RAID1 configuration as my cache drive. Running the latest 6.7RC3
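
As a complementary check, lsblk can show whether the kernel is actually exposing discard for a device behind the controller (device name is just an example); non-zero DISC-GRAN/DISC-MAX values mean the block layer will accept discard requests for it:

lsblk --discard /dev/sdg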

9 minutes ago, Trunkz said:

Not sure if this helps matters, but both my SSDs correctly report TRIM support:

 

 hdparm -I /dev/sdg | grep TRIM

  * Data Set Management TRIM supported (limit 8 blocks)

  * Deterministic read ZEROs after TRIM

 

These are 2x 860 EVO drives in a RAID1 configuration as my cache drive. Running the latest 6.7RC3

That's a nice find

I just verified my 860 EVO also supports DRAT

Also got my hands on a few Intel Pro 2300 SSDs which also support DRAT

 

Do you have your 860s going through a backplane or expander, @Trunkz?

1 hour ago, johnnie.black said:

It didn't make sense if it was, but just to confirm this isn't an Unraid problem, I installed the latest Fedora Workstation, which comes with mpt3sas v25.100.00.00, the same as Unraid v6.6, and I get exactly the same cryptic error when trying to trim a supported SSD with a SAS2 LSI:

 

FITRIM ioctl failed: Remote I/O error
 

I tried again with an Intel PRO 2500  SSD w/ the same results on a 9207-8e w/ expander & a 9211-8i w/ passthrough backplane

6 minutes ago, johnnie.black said:

I would say that for now, and likely for the foreseeable future, these are the options to get your SSDs trimmed:

 

-Use the onboard SATA ports; it works with any SSD.

-If your SSDs support deterministic trim and you really want to have them connected to the HBA, get a SAS3 LSI, like the 9300-8i.

How can I check to see if SAS2 works under 6.7?
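
One way to check, assuming the cache pool sits on an SSD behind the SAS2 HBA and is mounted at /mnt/cache, is to run a manual trim after upgrading and see whether the error goes away:

# Reports bytes trimmed if TRIM gets through the HBA;
# fails with "FITRIM ioctl failed: Remote I/O error" if not
fstrim -v /mnt/cache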

2 minutes ago, johnnie.black said:

I would say that for now, and likely for the foreseeable future, these are the options to get your SSDs trimmed:

 

-Use the onboard SATA ports; it works with any SSD.

-If your SSDs support deterministic trim and you really want to have them connected to the HBA, get a SAS3 LSI, like the 9300-8i.

@johnnie.black I did submit a somewhat lengthy email to the SCSI team with reference to the situation.  I'm not sure as to the official means of approaching them, but I hope to gain some insight from their perspective.  On one hand, looking at and comparing the MPT2SAS/MPT3SAS merge from the beginning up to the kernel 4.20 release, it looks like several corrections have been made.  Again, I'm no expert and haven't slept at a Holiday Inn for quite some time.  Nevertheless, I look forward to any response I receive from that team.

 

Your options aren't unreasonable by any means.  SAS3 controllers are slowly coming down in price.  Thinking about future capabilities and functions of Unraid, an SSD-only array would require a SAS3 controller to ensure the necessary housekeeping actions could function appropriately.

 

In my situation, I'm comfortable with waiting to see what 6.7 addresses WRT this TRIM situation.  If it doesn't address or resolve the situation, I may progress to a SAS3 series controller.  I don't like jumping to conclusions (can be costly), so I'll turn my patience dial up a few more ticks and wait and see...

5 minutes ago, johnnie.black said:

BTW, also tested on Arch Linux with kernel 4.20, latest mpt3sas v26.100.00.00, no change.

Simply out of curiosity, I had come to the same end result late yesterday afternoon.  As my setup was an undesirable hardware approach to verify/validate with, I appreciate your similar results.


I'm building a box running 6.6.6 with a 9207-8i in IT mode (20.00.00.00 at the moment, plan to update to 20.00.07.00) that has two Intel S3500s connected via SAS backplane - one as cache and one as a VM boot disk. I've experienced the same issue reported, but with a possible twist - despite being connected to the same controller (different ports), one SSD will TRIM, the other will not.

 

root@Tower:/var/log# hdparm -I /dev/sdc1|grep TRIM
           *    Data Set Management TRIM supported (limit 4 blocks)
           *    Deterministic read ZEROs after TRIM
root@Tower:/var/log# hdparm -I /dev/sdb1|grep TRIM
           *    Data Set Management TRIM supported (limit 4 blocks)
           *    Deterministic read ZEROs after TRIM

 

(These are 'limit 4 blocks' while LSI states 'limit 8 blocks' - unsure if this matters, but the results below suggest it doesn't.)

 

root@Tower:/var/log# fstrim -av
/mnt/disks/vm_ssd_1: 0 B (0 bytes) trimmed
fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error
/etc/libvirt: 922.2 MiB (966979584 bytes) trimmed
/var/lib/docker: 18 GiB (19329028096 bytes) trimmed

 

You can see that the cache disk fails, but the VM boot disk succeeds. Note this was a 2nd or 3rd run, so no data was TRIM'd on VM_SSD_1; on the first pass, ~200GB was TRIM'd. Other than the connected port, the only difference is the file system.

 

root@Tower:/var/log# df -Th
Filesystem     Type       Size  Used Avail Use% Mounted on
...
/dev/sdc1      btrfs      224G   17M  223G   1% /mnt/cache
/dev/sdb1      ext4       220G   60M  208G   1% /mnt/disks/vm_ssd_1

 

I thought it was odd enough to mention. I'm short a few onboard SATA ports to make this work, so it looks like a 9300-8i or bust at this point.

2 hours ago, punk said:

/dev/sdb1 ext4 220G 60M 208G 1% /mnt/disks/vm_ssd_1

This might help explain why I can't find many other Linux users complaining about the LSI trim issue: if they are using ext4 it looks like it still works. I'd still expect at least some XFS users to be complaining, maybe not so much with btrfs since it's likely much less used.

6 minutes ago, johnnie.black said:

This might help explain why I can't find many other Linux users complaining about the LSI trim issue: if they are using ext4 it looks like it still works. I'd still expect at least some XFS users to be complaining, maybe not so much with btrfs since it's likely much less used.

Exactly!  The story of my past four months.  Further complicated by a lack of understanding as to why you would use this card to do what you want to do with it.  Compounded by, "Wait, what?  You want to hook SSDs to it as well, and have it support and perform TRIM?"  The IRC channels can be helpful, but in this case they were more amused by the situation I was presenting.  Most concluded with, "There is no problem, just connect your SSDs to your motherboard SATA ports and all will be well."

 

Hadn't thought about reformatting most of my SSDs to ext4.  But for my cache (BTRFS) I don't think I will be so lucky.  On a happier note, I was able to find a genuine LSI 9300-8i (used) on eBay for $130.  Unfortunately I'm still filling out my Urgent Operational Needs Statement to convey the urgency of this purchase to the home sergeant major.

17 minutes ago, johnnie.black said:

This might help explain why I can't find many other Linux users complaining about the LSI trim issue: if they are using ext4 it looks like it still works. I'd still expect at least some XFS users to be complaining, maybe not so much with btrfs since it's likely much less used.

I failed to mention that the other "disks" that succeeded are btrfs. I assume these are loop devices backed by images that live somewhere else.

root@Tower:/var/log# df -Th
Filesystem     Type       Size  Used Avail Use% Mounted on
...
/dev/loop2     btrfs       20G   17M   18G   1% /var/lib/docker
/dev/loop3     btrfs      1.0G   17M  905M   2% /etc/libvirt
...
/dev/sdc1      btrfs      224G   17M  223G   1% /mnt/cache
/dev/sdb1      ext4       220G   60M  208G   1% /mnt/disks/vm_ssd_1

If I could switch the cache to ext4, I'd be set.


I've tossed around the idea of moving my cache drive (redundancy not needed in my case) to a PCIe adapter card and leaving all 8 ports on the 9207-8i for "other" use. It's that, or, for the sake of a skewed sense of "doing it right," spending ~$300 on a 9300-8i, assuming I can find a genuine one.

