Posts posted by jonnygrube

  1. JorgeB-
    Thank you for your help. I seem to have made a series of ill-advised moves here.
    I thought removing the drive would allow me to undo the RAID1 and fix the read errors I'm getting when trying to copy data off it.
    It did not, so I re-added it. See below for updated diagnostics.

    For background, the 1.6TB drive that's now reporting completely full had roughly 800GB free before I attempted the RAID1 conversion from single mode. I have no idea why it's reporting full, or why the GUI shows 757GB used / 212MB free yet 2.4TB total size. I keep trying to pull files off it with Midnight Commander, but I keep getting read-only and I/O errors.

    My goal is to pull the critical files and VMs off the 1.6TB drive and restore from an appdata backup. How should I proceed?
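
    Would something along these lines be a sane way to get the critical files off? This is just a sketch of what I have in mind; the share, device, and destination paths are placeholders, not what's actually in my box:

        # copy the critical shares and VM images straight off the (read-only) cache mount
        rsync -av /mnt/cache/domains/ /mnt/disk1/rescue/domains/
        # if the I/O errors keep killing the copy, btrfs restore can pull files
        # from the unmounted device instead (array stopped; device path is a placeholder)
        btrfs restore -v /dev/sdX1 /mnt/disk1/rescue/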

    diagnostics-20230518-1448.zip

  2. I tried to expedite a cache upgrade from a 1.6TB SAS drive to a 3.2TB F320 NVMe by adding the NVMe to the pool, then converting the btrfs pool to RAID1 (with the idea that I'd remove the 1.6TB once it was cloned and go back to single mode).
    Well, that did not work as planned.

    I've tried removing the F320 NVMe and converting back to single mode, with no luck; it errors out on something.
    I've tried scrubbing the cache, but that quickly errors out, both with both cache drives present and with the NVMe removed.
    I've also tried reverting to the safe method of changing the cache shares from Prefer to Yes and running the mover; that also errors out and leaves me with some empty and some full directories on my array, while the files are still on the server.
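
    For reference, the sequence I attempted was roughly the following (from memory, and the device names here are placeholders):

        # add the F320 NVMe to the existing single-device cache pool
        btrfs device add /dev/nvme0n1p1 /mnt/cache
        # convert data and metadata from single to raid1 so the two devices mirror
        btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
        # the plan was then to drop the 1.6TB SAS and convert back to single:
        btrfs device remove /dev/sdX1 /mnt/cache
        btrfs balance start -f -dconvert=single -mconvert=single /mnt/cache   # -f to allow reducing metadata redundancy, if I remember right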

     

    The cache drive is reporting the wrong size (smaller than 1.6TB) and only lets me access /mnt/cache read-only, not read/write.

    Does anybody have any suggestions on how to recover the cache or proceed with the upgrade?

    I've attached the diagnostic files.

    diagnostics-20230517-1720.zip

  3. As a technical note, FusionIO/Western Digital are still supporting their products, with drivers current as of 01/30/2020.


    Energen... these are still great cards, and far from obsolete for this application.

    I have an ioDrive2 and an SX350 that can do 2GB/s+ reads and writes at 8W idle through an ESXi 6.7 VM, with NVMe-like low latency.

    If I were to guess, I'd say there are probably 50-60 of us on the forums who would integrate FusionIO products into our builds, and at least that many more who'd be inclined to buy these cheap used cards for that purpose in the future.

    No, we aren't a majority... but it's not 5 or 6 or a dozen.  There's dozens of us...

    Tom, the CEO and admin, chimed in on this thread and put the ball in our court.
     

    On 2/16/2020 at 11:59 AM, limetech said:

    You could put the code and Makefile in a github repo and once other issues are sorted (such as udev rules) we can look at cloning repo and adding to Unraid OS.  However, if a newer kernel comes along and now driver won't build, we'll have no choice but to file an Issue in the repo and omit the driver until issue is resolved.

     

    If we want to merge this into his repo, somebody will need to work on the 5.X kernel integration.

    I would suggest following this Proxmox guide as a starting point: https://forum.proxmox.com/threads/configuring-fusion-io-sandisk-iodrive-iodrive2-ioscale-and-ioscale2-cards-with-proxmox.54832/
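
    For anyone who wants to poke at this in the meantime, my rough understanding of that guide's approach, translated to a generic Linux build box, is something like the below. Untested on my end, and the exact make target and module path should be double-checked against the repo's README:

        # grab the community-maintained VSL driver source that the Proxmox guide builds from
        git clone https://github.com/snuf/iomemory-vsl.git
        cd iomemory-vsl
        # build the kernel module against the headers for the running kernel
        make
        # load the freshly built module (find the .ko with: find . -name '*.ko') and
        # confirm the card shows up; fio-status comes from the fio-util package
        insmod iomemory-vsl.ko
        fio-status -a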

  4. 1 hour ago, itimpi said:

    Is the licensing for the fusionio drivers on Linux such that Limetech would even be allowed to distribute them as part of Unraid?    From what I have seen normally the end-user compiles themselves for themselves on their own Linux system.  I could not find the current definitive statement on the licensing terms so I could be wrong about that.

    I don't believe licensing is the issue. These cards are "EOL / past their 5-year support agreement" according to FusionIO, so continued updates that work with newer kernels aren't guaranteed going forward. Since the cards are completely reliant on the software to work, that effectively tombstones them on current kernels once support stops.

    That being said, as of right now new drivers are still being released every couple of months, with the latest released January 30, 2020 (see attached).


    Regardless, I'm trying to figure out a way to incentivize Dave at the ServeTheHome forums (a former FusionIO software dev) to pitch in. Any and all ideas are welcome.

    On 11/22/2019 at 4:59 PM, limetech said:

    This is not correct.  We use slackware packages but we keep up with kernel development.  For example just released Unraid 6.8.0-rc7 is using latest Linux stable release 5.3.12.  Upcoming Unraid 6.9 will no doubt use kernel 5.4.x.

     

    It would be nice if these drivers were merged into mainline - ask him if that would be possible.   Otherwise a vanilla set of driver source and Makefile is all we need if he can get it to compile against latest Linux kernels.

    It doesn't look like merging these drivers into the mainline kernel is possible right now. Is it possible to move forward with the source supplied by WD?

    fusionio.jpg

  5. 16 minutes ago, limetech said:

    This is a lot of work and may not even build on latest Linux kernels.

    True. I appreciate any time you can devote to this matter.


    As you're probably aware, these devices are high-IOPS, high-throughput, low-latency flash PCIe cards with extremely long endurance ratings, and they can be had for under $0.08/GB now. They are ideal for cache and VM use, especially at this price point; however, they are 100% driver/VSL dependent.

    I am in contact with a guy who worked on development and support of these cards for FusionIO/SanDisk/WD.

    Dunno if it helps, but this was his reply when asked the same thing about driver inclusion for the gen2/ioDrive2 product:

     

    "So unraid is slackware, and as such is using a 4.x kernel right now. You'd have to go into the driver download section for fedora/etc that feature a 4.x kernel and grab the iomemory-vsl-3.2.15.1699-1.0.src.rpm that is available there.

    I'd probably stand up a development slack VM with the kernel headers/build env setup and use that to build your kernel module for the ioDrives.

    As someone has already stated, if you update unraid, that ioDrive kernel module won't load and you'll have to build a new one for your newer kernel before the drives will come back online.

    You can set stuff up with dkms to auto-rebuild on new kernel updates, but that can sometimes be a bit of a learning curve...
    -- Dave"

     

    Previously, I just dumped a directory of all the support files for SLES 11/12... is there anything I can do, or ask Dave for, to make driver integration easier?
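
    In the meantime, my rough understanding of the dkms route Dave mentions looks something like the below. Completely untested on my end, and the module name, version, and make line are illustrative rather than taken from the actual packages:

        # /usr/src/iomemory-vsl-3.2.15/dkms.conf
        PACKAGE_NAME="iomemory-vsl"
        PACKAGE_VERSION="3.2.15"
        BUILT_MODULE_NAME[0]="iomemory-vsl"
        DEST_MODULE_LOCATION[0]="/kernel/drivers/block"
        MAKE[0]="make KERNELVER=${kernelver}"   # or whatever the driver's Makefile actually expects
        CLEAN="make clean"
        AUTOINSTALL="yes"

        # then register, build, and install the module for the running kernel
        dkms add -m iomemory-vsl -v 3.2.15
        dkms build -m iomemory-vsl -v 3.2.15
        dkms install -m iomemory-vsl -v 3.2.15

    With AUTOINSTALL set, dkms should rebuild the module automatically when the kernel changes, which is the part that matters for Unraid upgrades.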

  6. On 11/20/2019 at 11:25 AM, limetech said:

    What's the name of the driver?

    FusionIO (SanDisk, now owned by WD) supports RHEL, SLES, OEL, CentOS, Debian, and Ubuntu:

    https://link.westerndigital.com/enterprisesupport/software-download.html

     

    SUSE 11/12 current drivers and utilities:


    SX300/SX350/PX600 Linux_sles-12 driver v4.3.6 20191116 (current)
              - SRC -> iomemory-vsl4-4.3.6.1173-1.src.rpm
              - BIN -> iomemory-vsl4-3.12.49-11-default-4.3.6.1173-1.x86_64.rpm
                      -> iomemory-vsl4-4.4.21-69-default-4.3.6.1173-1.x86_64.rpm
                      -> iomemory-vsl4-4.4.73-5-default-4.3.6.1173-1.x86_64.rpm
              - Utility -> fio-preinstall-4.3.6.1173-1.x86_64.rpm
                         -> fio-sysvinit-4.3.6.1173-1.x86_64.rpm
                         -> fio-util-4.3.6.1173-1.x86_64.rpm

     

    SX300/SX350/PX600 Linux_sles-11 driver v4.3.6 20191116 (current)
              - SRC -> iomemory-vsl4-4.3.6.1173-1.src.rpm
              - BIN -> iomemory-vsl4-3.0.101-63-default-4.3.6.1173-1.x86_64.rpm
                      -> iomemory-vsl4-3.0.101-63-xen-4.3.6.1173-1.x86_64.rpm
                      -> iomemory-vsl4-3.0.76-0.11-default-4.3.6.1173-1.x86_64.rpm
                      -> iomemory-vsl4-3.0.76-0.11-xen-4.3.6.1173-1.x86_64.rpm
              - Utility -> fio-preinstall-4.3.6.1173-1.x86_64.rpm
                         -> fio-sysvinit-4.3.6.1173-1.x86_64.rpm
                         -> fio-util-4.3.6.1173-1.x86_64.rpm

     

     

    ioDrive/ioDrive2/ioDrive2Duo/ioScale Linux_sles-12 driver v3.2.16 20180912 (current)
              - SRC -> iomemory-vsl-3.2.16.1731-1.0.src.rpm
              - BIN -> iomemory-vsl-4.4.21-69-default-3.2.16.1731-1.0.x86_64.rpm
                      -> iomemory-vsl-4.4.73-5-default-3.2.16.1731-1.0.x86_64.rpm
              - Utility -> fio-common-3.2.16.1731-1.0.x86_64.rpm
                          -> fio-preinstall-3.2.16.1731-1.0.x86_64.rpm
                          -> fio-sysvinit-3.2.16.1731-1.0.x86_64.rpm
                          -> fio-util-3.2.16.1731-1.0.x86_64.rpm

     

    ioDrive/ioDrive2/ioDrive2Duo/ioScale Linux_sles-11 driver v3.2.16 20180912 (current)
              - SRC -> iomemory-vsl-3.2.16.1731-1.0.src.rpm
              - BIN -> iomemory-vsl-3.0.101-63-default-3.2.16.1731-1.0.x86_64.rpm
                      -> iomemory-vsl-3.0.101-63-xen-3.2.16.1731-1.0.x86_64.rpm
                      -> iomemory-vsl-3.0.76-0.11-default-3.2.16.1731-1.0.x86_64.rpm
                      -> iomemory-vsl-3.0.76-0.11-xen-3.2.16.1731-1.0.x86_64.rpm
              - Utility -> fio-common-3.2.16.1731-1.0.noarch.rpm
                          -> fio-preinstall-3.2.16.1731-1.0.noarch.rpm
                          -> fio-sysvinit-3.2.16.1731-1.0.noarch.rpm
                          -> fio-util-3.2.16.1731-1.0.noarch.rpm
                          -> lib32vsl-3.2.16.1731-1.i686.rpm
