Fusion ioDrive as cache drive


mmx01


Hi All,

 

I realize this is not officially supported, hence I'm only looking for pointers. I have the 640GB ioDrive (2x320GB) and wanted to make use of it. It took some effort to get the driver compiled, but here I am: the drive is recognized by the OS, and I was able to partition and mount it.

 

There is, however, no way I can convince the unRAID array configurator to show the drive as either a data or cache drive. Any suggestion on how this is done would be really appreciated. Is it that the configurator only looks for /dev/s*?

 

It shows up as block device fioa1 (partition 1 on fioa). I load the driver at boot, before md starts.
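If the array configurator really does scan only conventional device names, that would explain the behavior. This is an assumption, not confirmed against unRAID internals, but such a name filter can be illustrated against a sample device list:

```shell
# Assumption: unRAID's device scanner only picks up sd*/hd*/nvme* style
# names, so a Fusion-io device named fioa would be skipped entirely.
# Simulating that filter against a sample /sys/block listing:
printf 'sda\nsdb\nnvme0n1\nfioa\nloop0\n' | grep -E '^(sd|hd|nvme)'
# fioa and loop0 do not match; only sda, sdb and nvme0n1 pass.
```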

 

Regards,

Mariusz

 

root@unRAID:~# mount /dev/fioa1 /tmp/
root@unRAID:~# cd /tmp/
root@unRAID:/tmp# ls -la
total 16
drwxr-xr-x  1 root root   0 Feb 14 19:24 ./
drwxr-xr-x 19 root root 400 Feb 14 18:22 ../
root@unRAID:/tmp# touch test
root@unRAID:/tmp# ls -la
total 16
drwxr-xr-x  1 root root   8 Feb 14 19:24 ./
drwxr-xr-x 19 root root 400 Feb 14 18:22 ../
-rw-rw-rw-  1 root root   0 Feb 14 19:24 test
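For anyone wanting to script the manual mount above, here is a hedged sketch of what could go in /boot/config/go (the mount point /mnt/iocache is my own choice, not anything unRAID-specific). This keeps the drive outside the array, usable only as a manually managed fast disk:

```shell
#!/bin/bash
# Sketch only: load the out-of-tree Fusion-io driver and mount the
# btrfs partition before the array starts. Paths are assumptions.
modprobe iomemory_vsl || echo "iomemory_vsl driver not loaded"
mkdir -p /mnt/iocache
# Mount only if the driver actually created the block device.
if [ -b /dev/fioa1 ]; then
    mount /dev/fioa1 /mnt/iocache
fi
```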

 

root@unRAID:/tmp# blkid
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="272C-EBE2" TYPE="vfat"
/dev/sdb1: UUID="c157fef1-c037-4f8b-95c0-de093aeb6fe1" TYPE="xfs" PARTUUID="7efc64e8-e3d3-4f61-b1c3-eb6548190f19"
/dev/sdc1: UUID="6c8dddbd-03dd-40f6-8398-b708d100b5c8" TYPE="xfs" PARTUUID="d1c2a76a-2495-458d-94ba-96b2be254cb8"
/dev/sdd1: UUID="54b7540e-f3db-43c3-9d21-e81458e64e65" TYPE="xfs" PARTUUID="80e3ca13-ea97-44fd-bad1-14f1bc902128"
/dev/sde1: UUID="62dbd8ed-5621-4eb4-bc5a-ceb565864819" TYPE="xfs"
/dev/fioa1: UUID="ba5ea1c5-3f8d-4a99-b47b-5648af952df3" UUID_SUB="18fb9b57-2a27-4aba-b3b5-8c8f54befd29" TYPE="btrfs" PARTUUID="0da74ca9-01"

 

root@unRAID:~# fio-status

Found 1 ioMemory device in this system with 1 ioDrive Duo
Driver version: 3.2.15 build 1700

Adapter: Dual Adapter
        640GB High IOPS MLC Duo Adapter for IBM System x, Product Number:81Y4517, SN:90438
        External Power: NOT connected
        PCIe Power limit threshold: 24.75W
        Connected ioMemory modules:
          fct0: Product Number:81Y4517, SN:74486

fct0    Attached
        IBM ioDIMM 320GB, SN:74486
        Located in slot 0 Upper of ioDrive Duo HL SN:90438
        PCI:15:00.0
        Firmware v7.1.17, rev 116786 Public
        320.00 GBytes device size
        Internal temperature: 44.30 degC, max 44.30 degC
        Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
        Contained VSUs:
          fioa: ID:0, UUID:e6468008-1eeb-439d-addf-624d70706c56

fioa    State: Online, Type: block device
        ID:0, UUID:e6468008-1eeb-439d-addf-624d70706c56
        320.00 GBytes device size

 

 

[  151.212896] iomemory_vsl: loading out-of-tree module taints kernel.
[  151.215150] <6>fioinf VSL configuration hash: 50cc3bdba9fe52b90d1821e59d81452e4a6eac09
[  151.215190] <6>fioinf
[  151.215191] <6>fioinf Copyright (c) 2006-2014 Fusion-io, Inc. (acquired by SanDisk Corp. 2014)
[  151.215191] <6>fioinf Copyright (c) 2014-2016 SanDisk Corp. and/or all its affiliates. All rights reserved.
[  151.215192] <6>fioinf For Terms and Conditions see the License file included
[  151.215192] <6>fioinf with this driver package.
[  151.215192] <6>fioinf
[  151.215193] <6>fioinf ioDrive driver 3.2.15.1700 pinnacles@3dd0050df54c loading...
[  151.215549] iodrive 0000:15:00.0: enabling device (0000 -> 0002)
[  151.216259] <6>fioinf ioDrive 0000:15:00.0: mapping controller on BAR 5
[  151.216505] <6>fioinf ioDrive 0000:15:00.0: MSI enabled
[  151.216514] <6>fioinf ioDrive 0000:15:00.0: using MSI interrupts
[  151.246696] resource sanity check: requesting [mem 0x000e0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000e4000-0x000e7fff window]
[  151.246715] caller find_slot_number_bios+0x35/0x13a [iomemory_vsl] mapping multiple BARs
[  151.246824] resource sanity check: requesting [mem 0x000e0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000e4000-0x000e7fff window]
[  151.246836] caller find_slot_number_bios+0x35/0x13a [iomemory_vsl] mapping multiple BARs
[  151.246877] <6>fioinf ioDrive 0000:15:00.0.0: Starting master controller
[  152.101641] <6>fioinf ioDrive 0000:15:00.0.0: Adapter serial number is 90438
[  152.956551] <6>fioinf ioDrive 0000:15:00.0.0: Board serial number is 74486
[  152.956554] <6>fioinf ioDrive 0000:15:00.0.0: Default capacity        320.000 GBytes
[  152.956555] <6>fioinf ioDrive 0000:15:00.0.0: Default sector size     512 bytes
[  152.956556] <6>fioinf ioDrive 0000:15:00.0.0: Rated endurance         4.00 PBytes
[  152.956557] <6>fioinf ioDrive 0000:15:00.0.0: 85C temp range hardware found
[  152.961062] <6>fioinf ioDrive 0000:15:00.0.0: Firmware version 7.1.17 116786 (0x700411 0x1c832)
[  152.961063] <6>fioinf ioDrive 0000:15:00.0.0: Platform version 10
[  152.961063] <6>fioinf ioDrive 0000:15:00.0.0: Firmware VCS version 116786 [0x1c832]
[  152.961071] <6>fioinf ioDrive 0000:15:00.0.0: Firmware VCS uid 0xaeb15671994a45642f91efbb214fa428e4245f8a
[  152.963680] <6>fioinf ioDrive 0000:15:00.0.0: Powercut flush: Enabled
[  153.064478] <6>fioinf ioDrive 0000:15:00.0.0: PCIe power monitor enabled (master). Limit set to 24.750 watts.
[  153.064480] <6>fioinf ioDrive 0000:15:00.0.0: Thermal monitoring: Enabled
[  153.064482] <6>fioinf ioDrive 0000:15:00.0.0: Hardware temperature alarm set for 85C.
[  153.073850] <6>fioinf ioDrive 0000:15:00.0: Found device fct0 (640GB High IOPS MLC Duo Adapter for IBM System x 0000:15:00.0) on pipeline 0
[  154.001784] <6>fioinf 640GB High IOPS MLC Duo Adapter for IBM System x 0000:15:00.0: probed fct0
[  154.004480] <6>fioinf 640GB High IOPS MLC Duo Adapter for IBM System x 0000:15:00.0: sector_size=512
[  154.004486] <6>fioinf 640GB High IOPS MLC Duo Adapter for IBM System x 0000:15:00.0: setting channel range data to [2 .. 4095]
[  154.015829] <6>fioinf 640GB High IOPS MLC Duo Adapter for IBM System x 0000:15:00.0: Found metadata in EBs 3541-485, loading...
[  154.139779] <6>fioinf 640GB High IOPS MLC Duo Adapter for IBM System x 0000:15:00.0: setting recovered append point 485+96796672
[  154.216963] <6>fioinf 640GB High IOPS MLC Duo Adapter for IBM System x 0000:15:00.0: Creating device of size 320000000000 bytes with 625000000 sectors of 512 bytes (9144 mapped).
[  154.218287] fioinf enable_discard set but discard not supported on this linux version
[  154.218296] fioinf 640GB High IOPS MLC Duo Adapter for IBM System x 0000:15:00.0: Creating block device fioa: major: 254 minor: 0 sector size: 512...
[  154.218547]  fioa: fioa1
[  154.218721] <6>fioinf 640GB High IOPS MLC Duo Adapter for IBM System x 0000:15:00.0: Attach succeeded.
[  156.583510] md: unRAID driver 2.9.13 installed

  • 2 weeks later...

I was wondering the same thing. I have yet to try it myself. I purchased an HP-branded ioDrive2 1200 off eBay last week to try in one of my ESXi servers. 1.2TB for $100 solved my problem with installing SSDs in the 8 SFF bays of a DL380 G7, which makes the fans ramp to 75%. I would love to use one or two of these in my unRAID server for cache. I need to purchase a couple more, so I guess I will try it out and report back in a couple of weeks.


Years ago I was able to get my ioDrive2s to work, but it was not solid and everything was a nightmare. I think this was on 5.x and I haven't tried since then. I still have something like 8 of them to play with, but I need to pull them out of the R720xd's in storage. Do you have any other options, or are you set on using the Fusion-io card?


*Update*  Completely unrelated to unRAID, but I was able to get my 1205GB HP ioDrive2 to work with ESXi on my DL380 G7, and the performance boost from using it as a local datastore instead of 8 SAS drives in RAID 10 is significant. Also noticed a drop in power draw from the server.  :D

 

In regards to my unRAID server, I personally don't NEED an ioDrive to work in unRAID as a cache disk. I already have 2 SATA SSDs raided in a 2.5" hotswap cage in my PowerEdge T410. I want to do it just because, I guess. It would free up a 3.5" drive bay that I could use as an array disk slot in a hot-swap cage.

 

I need to order a couple more ioDrives for my ESXi servers, so I may just give this a try when my next order arrives.
