Fusion IO drivers



On 11/20/2019 at 11:25 AM, limetech said:

What's the name of the driver?

Fusion IO (SanDisk, now owned by WD) supports RHEL, SLES, OEL, CentOS, Debian, and Ubuntu:

https://link.westerndigital.com/enterprisesupport/software-download.html

 

SUSE 11/12 current drivers and utilities:


SX300/SX350/PX600 Linux_sles-12 driver v4.3.6 20191116 (current)
          - SRC -> iomemory-vsl4-4.3.6.1173-1.src.rpm
          - BIN -> iomemory-vsl4-3.12.49-11-default-4.3.6.1173-1.x86_64.rpm
                  -> iomemory-vsl4-4.4.21-69-default-4.3.6.1173-1.x86_64.rpm
                  -> iomemory-vsl4-4.4.73-5-default-4.3.6.1173-1.x86_64.rpm
          - Utility -> fio-preinstall-4.3.6.1173-1.x86_64.rpm
                     -> fio-sysvinit-4.3.6.1173-1.x86_64.rpm
                     -> fio-util-4.3.6.1173-1.x86_64.rpm

 

SX300/SX350/PX600 Linux_sles-11 driver v4.3.6 20191116 (current)
          - SRC -> iomemory-vsl4-4.3.6.1173-1.src.rpm
          - BIN -> iomemory-vsl4-3.0.101-63-default-4.3.6.1173-1.x86_64.rpm
                  -> iomemory-vsl4-3.0.101-63-xen-4.3.6.1173-1.x86_64.rpm
                  -> iomemory-vsl4-3.0.76-0.11-default-4.3.6.1173-1.x86_64.rpm
                  -> iomemory-vsl4-3.0.76-0.11-xen-4.3.6.1173-1.x86_64.rpm
          - Utility -> fio-preinstall-4.3.6.1173-1.x86_64.rpm
                     -> fio-sysvinit-4.3.6.1173-1.x86_64.rpm
                     -> fio-util-4.3.6.1173-1.x86_64.rpm

 

 

ioDrive/ioDrive2/ioDrive2Duo/ioScale Linux_sles-12 driver v3.2.16 20180912 (current)
          - SRC -> iomemory-vsl-3.2.16.1731-1.0.src.rpm
          - BIN -> iomemory-vsl-4.4.21-69-default-3.2.16.1731-1.0.x86_64.rpm
                  -> iomemory-vsl-4.4.73-5-default-3.2.16.1731-1.0.x86_64.rpm
          - Utility -> fio-common-3.2.16.1731-1.0.x86_64.rpm
                      -> fio-preinstall-3.2.16.1731-1.0.x86_64.rpm
                      -> fio-sysvinit-3.2.16.1731-1.0.x86_64.rpm
                      -> fio-util-3.2.16.1731-1.0.x86_64.rpm

 

ioDrive/ioDrive2/ioDrive2Duo/ioScale Linux_sles-11 driver v3.2.16 20180912 (current)
          - SRC -> iomemory-vsl-3.2.16.1731-1.0.src.rpm
          - BIN -> iomemory-vsl-3.0.101-63-default-3.2.16.1731-1.0.x86_64.rpm
                  -> iomemory-vsl-3.0.101-63-xen-3.2.16.1731-1.0.x86_64.rpm
                  -> iomemory-vsl-3.0.76-0.11-default-3.2.16.1731-1.0.x86_64.rpm
                  -> iomemory-vsl-3.0.76-0.11-xen-3.2.16.1731-1.0.x86_64.rpm
          - Utility -> fio-common-3.2.16.1731-1.0.noarch.rpm
                      -> fio-preinstall-3.2.16.1731-1.0.noarch.rpm
                      -> fio-sysvinit-3.2.16.1731-1.0.noarch.rpm
                      -> fio-util-3.2.16.1731-1.0.noarch.rpm
                      -> lib32vsl-3.2.16.1731-1.i686.rpm

16 minutes ago, limetech said:

This is a lot of work and may not even build on latest Linux kernels.

True, I appreciate any time you can devote to this matter.


As you are probably aware, these devices are high-IOPS, high-STR, low-latency flash PCIe cards with extremely long endurance ratings that can be had for under $0.08/GB now.  They are ideal for cache and VM use, especially at this price point; however, they are 100% driver/VSL dependent.

I am in contact with a guy who worked on development and support of these cards for FusionIO/SanDisk/WD.

Dunno if it helps, but this was his reply when asked the same question regarding driver inclusion for the gen2/ioDrive II product:

 

"So unraid is slackware, and as such is using a 4.x kernel right now. You'd have to go into the driver download section for fedora/etc that feature a 4.x kernel and grab the iomemory-vsl-3.2.15.1699-1.0.src.rpm that is available there.

I'd probably stand up a development slack VM with the kernel headers/build env setup and use that to build your kernel module for the ioDrives.

As someone has already stated, if you update unraid, that ioDrive kernel module won't load and you'll have to build a new one for your newer kernel before the drives will come back online.

You can set stuff up with dkms to auto-rebuild on new kernel updates, but that can sometimes be a bit of a learning curve...
-- Dave"
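
For reference, the dkms auto-rebuild Dave mentions is driven by a dkms.conf file shipped alongside the module source. A minimal sketch of what such a file looks like (the field values here are illustrative assumptions, not FusionIO's actual packaging):

PACKAGE_NAME="iomemory-vsl"
PACKAGE_VERSION="3.2.15"                        # assumed; must match the source tree
BUILT_MODULE_NAME[0]="iomemory-vsl"
DEST_MODULE_LOCATION[0]="/kernel/drivers/block"
AUTOINSTALL="yes"                               # rebuild automatically when a new kernel is installed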

 

Previously, I just dumped a directory of all the support files for SLES 11/12... is there anything I can do, or ask Dave to do, to make driver integration easier?

2 minutes ago, jonnygrube said:

So unraid is slackware, and as such is using a 4.x kernel right now.

This is not correct.  We use Slackware packages but we keep up with kernel development.  For example, the just-released Unraid 6.8.0-rc7 uses the latest stable Linux release, 5.3.12.  The upcoming Unraid 6.9 will no doubt use kernel 5.4.x.

 

It would be nice if these drivers were merged into mainline - ask him if that would be possible.  Otherwise, a vanilla set of driver source and a Makefile is all we need, if he can get it to compile against the latest Linux kernels.
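
To be concrete, "driver source and Makefile" here means the usual out-of-tree kbuild layout, roughly like the sketch below (the object name is illustrative, not the actual FusionIO file layout; recipe lines must start with a tab):

# Makefile for an out-of-tree kernel module (sketch)
obj-m := iomemory-vsl.o

KDIR ?= /lib/modules/$(shell uname -r)/build

default:
	$(MAKE) -C $(KDIR) M=$(CURDIR) modules

clean:
	$(MAKE) -C $(KDIR) M=$(CURDIR) clean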

  • 1 month later...
  • 4 weeks later...

Is the licensing for the FusionIO drivers on Linux such that Limetech would even be allowed to distribute them as part of Unraid?  From what I have seen, end users normally compile the drivers themselves on their own Linux systems.  I could not find a current definitive statement on the licensing terms, so I could be wrong about that.

1 hour ago, itimpi said:

Is the licensing for the FusionIO drivers on Linux such that Limetech would even be allowed to distribute them as part of Unraid?  From what I have seen, end users normally compile the drivers themselves on their own Linux systems.  I could not find a current definitive statement on the licensing terms, so I could be wrong about that.

I don't believe licensing is the issue; these are "EOL/past their 5-year support agreement" according to FusionIO, so they are not guaranteeing continued updates that would work with newer kernels in the future.  Since the devices are completely reliant on software to work, this effectively tombstones them on current kernels once support stops.

That being said, as of right now new drivers are still being released every couple of months, with the latest released Jan 30, 2020 (see attached).


Regardless, I'm trying to figure out a way to incentivize Dave at the ServeTheHome forums (a former FusionIO software dev) to pitch in.  Any and all ideas are welcome.

On 11/22/2019 at 4:59 PM, limetech said:

This is not correct.  We use Slackware packages but we keep up with kernel development.  For example, the just-released Unraid 6.8.0-rc7 uses the latest stable Linux release, 5.3.12.  The upcoming Unraid 6.9 will no doubt use kernel 5.4.x.

 

It would be nice if these drivers were merged into mainline - ask him if that would be possible.  Otherwise, a vanilla set of driver source and a Makefile is all we need, if he can get it to compile against the latest Linux kernels.

It doesn't look like merging these drivers into the mainline is possible right now.  Is it possible to move forward with the source supplied by WD?

fusionio.jpg

  • 2 weeks later...

I have managed to get the driver compiled and working. The device formats as XFS and mounts with no issues. The issue is that the array configurator does not display the device; the same goes for the Unassigned Devices plugin.

 

I suspect it is because the drive reports as a block device under the fio naming scheme (/dev/fioa, /dev/fioa1) rather than sd*. The udev persistent-storage rules and probably all the other Unraid-specific scripts look for sd* and nvme*. This is what I am trying to establish now.
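
One way to check that is to compare what udev actually reports for the ioDrive against a normal sd* disk (a diagnostic sketch; device names and output will vary on your system):

# Dump the udev properties for the fio device and a regular SCSI disk,
# then diff them to see which ID_* attributes the persistent-storage rules rely on.
udevadm info --query=property --name=/dev/fioa > /tmp/fioa.props
udevadm info --query=property --name=/dev/sdb  > /tmp/sdb.props
diff /tmp/fioa.props /tmp/sdb.props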

1 hour ago, mmx01 said:

I have managed to get the driver compiled and working. The device formats as XFS and mounts with no issues. The issue is that the array configurator does not display the device; the same goes for the Unassigned Devices plugin.

 

I suspect it is because the drive reports as a block device under the fio naming scheme (/dev/fioa, /dev/fioa1) rather than sd*. The udev persistent-storage rules and probably all the other Unraid-specific scripts look for sd* and nvme*. This is what I am trying to establish now.

If you do manage to get it fully working as a cache drive, I'd be forever thankful if you could write a guide on how others can do the same, as I'd really like to make use of my ioDrive 2.

4 hours ago, mmx01 said:

I have managed to get the driver compiled and working. The device formats as XFS and mounts with no issues. The issue is that the array configurator does not display the device; the same goes for the Unassigned Devices plugin.

 

I suspect it is because the drive reports as a block device under the fio naming scheme (/dev/fioa, /dev/fioa1) rather than sd*. The udev persistent-storage rules and probably all the other Unraid-specific scripts look for sd* and nvme*. This is what I am trying to establish now.

If you get this worked out, here's a thought.  You could put the code and Makefile in a GitHub repo, and once the other issues are sorted (such as udev rules) we can look at cloning the repo and adding it to Unraid OS.  However, if a newer kernel comes along and the driver won't build, we'll have no choice but to file an issue in the repo and omit the driver until the issue is resolved.


I am not a developer, hence I have no experience in packaging this stuff ;) Tell me what is needed to progress on this, even as a test case. What I have is a completely manual, mock-up procedure. I was even able to convert the Debian package to txz and get fio-utils working to manage the drive.

 

However, simple tricks like creating links (sds -> fioa, sds1 -> fioa1) do not do the trick. The drive does not become visible in Unassigned Devices or the array configurator. This is why I started looking at udev mappings to convince the drive to attach as sd* rather than fio*.
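
If the udev route works out, the kind of rule I have in mind looks roughly like this (a sketch only; the match keys are assumptions, and whether emhttp then accepts the device is exactly what still needs to be established):

# /etc/udev/rules.d/99-iodrive.rules (sketch)
# Give fio block devices by-id style symlinks so anything that scans
# /dev/disk/by-id at least sees them.
KERNEL=="fio*", SUBSYSTEM=="block", SYMLINK+="disk/by-id/fio-%k"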

 

Steps to get it working as a drive visible to the OS:

 

1. First you need the kernel source for 4.19.88 (linux-4.19.88.tar.xz) and a couple of packages from Slackware 14.2:

- dkms (and all its dependencies, like ncurses, in txz format; easy to install with installpkg)

- the dev tools plugin, to be able to compile

2. Copy .config from the existing kernel source to the new one. If you don't set the kernel name correctly (which .config does), insmod will complain about a version mismatch after the ioMemory driver is built.

3. The new kernel source ends up in /usr/src/linux-4.19.88. You need to temporarily move the existing tree aside and rename the new one to /usr/src/linux-4.19.88-Unraid so all headers are available for compiling; there is no need to boot the new kernel at all.

4. You need the driver from https://github.com/snuf/iomemory-vsl

- Follow the dkms steps on GitHub to compile the driver (a rough command sketch follows these steps).

5. Since /lib/modules is a read-only squashfs, I took the easy path of copying the compiled .ko module to /boot and adding an insmod /boot/iomemory-vsl.ko.xz line to /boot/config/go. This is not mature enough yet to go through building a new bzmodules.

6. Register on the SanDisk web page and download fio-util_3.2.15.1700-1.0_amd64.deb; deb2tgz will give you a txz which you can install with installpkg.
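
Put together as commands, the procedure is roughly the following (a from-memory sketch; the dkms module name/version must match the repo's dkms.conf, and paths and version strings are examples):

# Steps 2-3: unpack the matching kernel source and carry over the shipped config
cd /usr/src
tar xf linux-4.19.88.tar.xz
mv linux-4.19.88-Unraid linux-4.19.88-Unraid.orig        # set the shipped tree aside
mv linux-4.19.88 linux-4.19.88-Unraid                    # headers where builds expect them
cp linux-4.19.88-Unraid.orig/.config linux-4.19.88-Unraid/   # keeps the "-Unraid" LOCALVERSION
make -C linux-4.19.88-Unraid olddefconfig modules_prepare    # probably needed so external module builds find generated headers

# Step 4: build the driver with dkms (module name/version assumed from the repo)
git clone https://github.com/snuf/iomemory-vsl /usr/src/iomemory-vsl-3.2.15
dkms add   -m iomemory-vsl -v 3.2.15
dkms build -m iomemory-vsl -v 3.2.15 -k 4.19.88-Unraid
# skip "dkms install" -- /lib/modules is a read-only squashfs here

# Step 5: copy the built module to the flash drive and load it from the go script
cp /var/lib/dkms/iomemory-vsl/3.2.15/4.19.88-Unraid/x86_64/module/iomemory-vsl.ko /boot/
echo "insmod /boot/iomemory-vsl.ko" >> /boot/config/go

# Step 6: convert and install the utilities package
deb2tgz fio-util_3.2.15.1700-1.0_amd64.deb
installpkg fio-util*.t?z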

 

This does not do the trick:

ln -s /dev/fioa /dev/sds
ln -s /dev/fioa1 /dev/sds1
ln -s /dev/sds /dev/disk/by-id/iodrive
ln -s /dev/sds1 /dev/disk/by-id/iodrive-part1

 

It is not listed by lsscsi:

root@unRAID:/usr/src# lsscsi
[0:0:0:0]    disk    SanDisk  Cruzer Blade     1.00  /dev/sda
[1:0:0:0]    disk    SEAGATE  ST1000NX0323     K002  /dev/sdb
[1:0:1:0]    disk    SEAGATE  ST1000NX0323     K002  /dev/sdc
[1:0:2:0]    disk    SEAGATE  ST1000NX0323     K002  /dev/sdd
[1:0:3:0]    disk    SEAGATE  ST1000NM0023     0004  /dev/sde
[1:0:4:0]    disk    SEAGATE  ST1000NM0023     0004  /dev/sdf
[1:0:5:0]    disk    ATA      CT500MX500SSD1   022   /dev/sdg

 

but it is still there and working:

root@unRAID:/boot/files# mount /dev/fioa1 /tmp/tmp/

root@unRAID:/boot/files# touch /tmp/tmp/test

root@unRAID:/boot/files# ls -la /tmp/tmp/
total 16
drwxr-xr-x  1 root root   8 Feb 16 21:30 ./
drwxrwxrwt 12 root root 240 Feb 16 21:30 ../
-rw-rw-rw-  1 root root   0 Feb 16 21:30 test

 

root@unRAID:/boot/files# blkid
/dev/fioa1: UUID="2b299144-f28d-4c06-9884-483586007b02" UUID_SUB="f4b16ba2-5ac7-4162-be24-ee0022d7f0c2" TYPE="btrfs"

 

root@unRAID:/boot/files# fio-status

Found 1 ioMemory device in this system with 1 ioDrive Duo
Driver version: 3.2.15 build 1700

Adapter: Dual Adapter
        640GB High IOPS MLC Duo Adapter for IBM System x, Product Number:81Y4517, SN:90438
        External Power: NOT connected
        PCIe Power limit threshold: 24.75W
        Connected ioMemory modules:
          fct0: Product Number:81Y4517, SN:74486

fct0    Attached
        IBM ioDIMM 320GB, SN:74486
        Located in slot 0 Upper of ioDrive Duo HL SN:90438
        PCI:15:00.0
        Firmware v7.1.17, rev 116786 Public
        320.00 GBytes device size
        Internal temperature: 43.31 degC, max 44.30 degC
        Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
        Contained VSUs:
          fioa: ID:0, UUID:e6468008-1eeb-439d-addf-624d70706c56

fioa    State: Online, Type: block device
        ID:0, UUID:e6468008-1eeb-439d-addf-624d70706c56
        320.00 GBytes device size

 


[  154.804222] <6>fioinf ioDrive 0000:15:00.0: Found device fct0 (640GB High IOPS MLC Duo Adapter for IBM System x 0000:15:00.0) on pipeline 0
[  155.671053] <6>fioinf 640GB High IOPS MLC Duo Adapter for IBM System x 0000:15:00.0: probed fct0
[  155.675414] <6>fioinf 640GB High IOPS MLC Duo Adapter for IBM System x 0000:15:00.0: sector_size=512
[  155.675420] <6>fioinf 640GB High IOPS MLC Duo Adapter for IBM System x 0000:15:00.0: setting channel range data to [2 .. 4095]
[  155.685856] <6>fioinf 640GB High IOPS MLC Duo Adapter for IBM System x 0000:15:00.0: Found metadata in EBs 2698-3028, loading...
[  155.805497] <6>fioinf 640GB High IOPS MLC Duo Adapter for IBM System x 0000:15:00.0: setting recovered append point 3028+96796672
[  155.880346] <6>fioinf 640GB High IOPS MLC Duo Adapter for IBM System x 0000:15:00.0: Creating device of size 320000000000 bytes with 625000000 sectors of 512 bytes (317861 mapped).
[  155.881689] fioinf enable_discard set but discard not supported on this linux version
[  155.881701] fioinf 640GB High IOPS MLC Duo Adapter for IBM System x 0000:15:00.0: Creating block device fioa: major: 254 minor: 0 sector size: 512...
[  155.881984]  fioa: fioa1
[  155.882422] <6>fioinf 640GB High IOPS MLC Duo Adapter for IBM System x 0000:15:00.0: Attach succeeded.
[  158.627521] md: unRAID driver 2.9.13 installed

 

[23744.304519] BTRFS: device fsid 2b299144-f28d-4c06-9884-483586007b02 devid 1 transid 5 /dev/fioa1
[23747.038336] BTRFS info (device fioa1): disk space caching is enabled
[23747.038338] BTRFS info (device fioa1): has skinny extents
[23747.038338] BTRFS info (device fioa1): flagging fs with big metadata feature
[23747.040864] BTRFS info (device fioa1): checking UUID tree


It looks like either mdcmd or another process waits for an attach message from a device named sd* or nvme*, which fio* does not match.

 

It does show up in Tools/Preclear though:

 

Preclear Disk

Device    Size      Preclear Status
fioa      320 GB    Disk mounted

 

How is the array configurator enumerating drives? by-id is not the way, as otherwise it should pop up:

 

root@unRAID:/boot/files# ls /dev/disk/by-id/
ata-CT500MX500SSD1_1832E14D21D9@        usb-SanDisk_Cruzer_Blade_4C530001280726115430-0:0@
ata-CT500MX500SSD1_1832E14D21D9-part1@  usb-SanDisk_Cruzer_Blade_4C530001280726115430-0:0-part1@
iodrive@                                wwn-0x5000c50062e19a17@
iodrive-part1@                          wwn-0x5000c50062e19a17-part1@
scsi-35000c50062e19a17@                 wwn-0x5000c5006343d327@
scsi-35000c50062e19a17-part1@           wwn-0x5000c5006343d327-part1@
scsi-35000c5006343d327@                 wwn-0x5000c5009e743a3f@
scsi-35000c5006343d327-part1@           wwn-0x5000c5009e743a3f-part1@
scsi-35000c5009e743a3f@                 wwn-0x5000c5009e743afb@
scsi-35000c5009e743a3f-part1@           wwn-0x5000c5009e743afb-part1@
scsi-35000c5009e743afb@                 wwn-0x5000c5009e743f43@
scsi-35000c5009e743afb-part1@           wwn-0x5000c5009e743f43-part1@
scsi-35000c5009e743f43@                 wwn-0x500a0751e14d21d9@
scsi-35000c5009e743f43-part1@           wwn-0x500a0751e14d21d9-part1@

  • 2 weeks later...
  • 2 weeks later...
  • 4 weeks later...

I also got two of these for cache drives and would love to be able to use them. This is really my only option for a cache drive, short of losing storage from my array, since all my drive bays are full of mechanical drives. I'm new to Unraid, so not much help, but I bought these for this purpose and was really disappointed to find out they are not supported.

On 2/16/2020 at 2:57 PM, mmx01 said:

It looks like either mdcmd or another process waits for an attach message from a device named sd* or nvme*, which fio* does not match.

 

It does show up in Tools/Preclear though:

 

Preclear Disk

Device    Size      Preclear Status
fioa      320 GB    Disk mounted

 

How is the array configurator enumerating drives? by-id is not the way, as otherwise it should pop up:

 

root@unRAID:/boot/files# ls /dev/disk/by-id/
ata-CT500MX500SSD1_1832E14D21D9@        usb-SanDisk_Cruzer_Blade_4C530001280726115430-0:0@
ata-CT500MX500SSD1_1832E14D21D9-part1@  usb-SanDisk_Cruzer_Blade_4C530001280726115430-0:0-part1@
iodrive@                                wwn-0x5000c50062e19a17@
iodrive-part1@                          wwn-0x5000c50062e19a17-part1@
scsi-35000c50062e19a17@                 wwn-0x5000c5006343d327@
scsi-35000c50062e19a17-part1@           wwn-0x5000c5006343d327-part1@
scsi-35000c5006343d327@                 wwn-0x5000c5009e743a3f@
scsi-35000c5006343d327-part1@           wwn-0x5000c5009e743a3f-part1@
scsi-35000c5009e743a3f@                 wwn-0x5000c5009e743afb@
scsi-35000c5009e743a3f-part1@           wwn-0x5000c5009e743afb-part1@
scsi-35000c5009e743afb@                 wwn-0x5000c5009e743f43@
scsi-35000c5009e743afb-part1@           wwn-0x5000c5009e743f43-part1@
scsi-35000c5009e743f43@                 wwn-0x500a0751e14d21d9@
scsi-35000c5009e743f43-part1@           wwn-0x500a0751e14d21d9-part1@

Are you saying you were able to get your FusionIO drive working in Unraid using the steps above? Are you using it as a cache drive, and is it working as expected?

 

Thanks
