• [6.12.4] Adding a cache (L2ARC) to a ZFS disk in the unRAID array makes the disk unmountable on array startup.


    madejackson
    • Solved Minor

    Basically the title.

     

    I added an L2ARC cache to my ZFS disk in the array (zpool add [pool name] cache [disk identifier]).

     

    Upon restarting the array, the disk becomes unmountable.

     

    Removing the cache disk fixes the issue (zpool remove [pool name] [device name])
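
    For reference, a concrete example of the two commands, assuming the array-disk pool is named disk1 and using a placeholder device path:

    # add an L2ARC device to the pool of array disk 1 (placeholder device path)
    zpool add disk1 cache /dev/disk/by-id/<cache-ssd>-part1
    
    # remove it again so the disk mounts normally on the next array start
    zpool remove disk1 /dev/disk/by-id/<cache-ssd>-part1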





    Recommended Comments

    Make sure you are doing it like this, since it's not yet officially supported on v6.12 (it should be on v6.13). If you are doing it like that, post the diagnostics and the output of

    zpool import

    with the array stopped.


    Oops, I missed that it was an array disk. That is not supported, and AFAIK there are no plans to support it; you can use L2ARC in a pool instead.


    Thanks, though you're talking about unRAID pools, right? I have the issue with ZFS devices in my unRAID array, not in the pools.

     

    Edit: Ah, you were faster. :) Thanks for the reply; let's see if I can come up with some workaround.

    Edited by madejackson

    So if anyone runs across this, I found a workaround:

     

    I added two scripts, one of which runs at array start and the other at array stop.

    On start, the script waits 300 seconds and then attaches the L2ARC partitions to all of my ZFS disks.

    On stop, it removes the same cache devices from the zpools again, so the disks stay mountable on the next array start.

     

    You need to edit the scripts for your disks accordingly:

     

    L2ARC enable:

     

    #!/bin/bash
    
    # wait for the array to finish mounting before touching the pools
    sleep 300
    
    # attach one L2ARC partition to each array-disk pool
    zpool add disk1 cache /dev/disk/by-id/<cache-disk>-part1
    zpool add disk2 cache /dev/disk/by-id/<cache-disk>-part2
    # repeat as often as necessary
    #zpool add diskX cache /dev/disk/by-id/<cache-disk>-partX

     

    L2ARC disable:

     

    #!/bin/bash
    
    # detach the L2ARC partitions again before the array stops,
    # otherwise the disks show up as unmountable on the next start
    zpool remove disk1 /dev/disk/by-id/<cache-disk>-part1
    zpool remove disk2 /dev/disk/by-id/<cache-disk>-part2
    # repeat as often as necessary
    #zpool remove diskX /dev/disk/by-id/<cache-disk>-partX
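
    If the fixed sleep 300 turns out to be fragile, an untested variant of the enable script could poll until the pool is actually imported before adding the cache (same placeholder names as above):

    #!/bin/bash
    
    # sketch only: wait until the pool exists instead of sleeping a fixed 300 s
    until zpool list disk1 >/dev/null 2>&1; do
        sleep 10
    done
    zpool add disk1 cache /dev/disk/by-id/<cache-disk>-part1
    # repeat the same pattern for disk2 ... diskX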

     

    OT: I first partitioned the SSD into 14 evenly sized partitions, one for each of my 14 disks. You can also use separate or multiple disks for L2ARC if you like.
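
    For anyone wanting to replicate that layout, a rough, untested sketch with sgdisk (the device path is a placeholder, and this wipes the disk):

    #!/bin/bash
    
    # split a cache SSD into 14 roughly equal GPT partitions (destroys existing data!)
    DEV=/dev/disk/by-id/<cache-disk>
    # leave a small margin for GPT metadata and partition alignment
    SIZE_MB=$(( ( $(blockdev --getsize64 "$DEV") / 1024 / 1024 - 64 ) / 14 ))
    sgdisk --zap-all "$DEV"
    for i in $(seq 1 14); do
        sgdisk -n "${i}:0:+${SIZE_MB}M" "$DEV"
    done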


    Edited by madejackson

    Are you still running this setup? Can this replace using a cache pool?

    I would appreciate hearing about the experience you had adding L2ARC to a ZFS array disk.

    2 hours ago, 10bn said:

    Are you still running this setup? Can this replace using a cache pool?

    I would appreciate hearing about the experience you had adding L2ARC to a ZFS array disk.

    Not really, no. I replaced my machine and also removed those L2ARC disks in the process. I came to the conclusion that ZFS was a bad idea after all, as it eats too much of my memory: I am at 20+ GB of RAM usage just from my 55 TB array with ZFS disks. I am going to switch back to XFS for not-so-mission-critical files, i.e. 13x XFS, 1x ZFS + 2x ZFS SSDs.

     

    16 hours ago, madejackson said:

    Not really, no. I replaced my machine and also removed those L2ARC disks in the process. I came to the conclusion that ZFS was a bad idea after all, as it eats too much of my memory: I am at 20+ GB of RAM usage just from my 55 TB array with ZFS disks. I am going to switch back to XFS for not-so-mission-critical files, i.e. 13x XFS, 1x ZFS + 2x ZFS SSDs.

     

    Couldn't you have limited the ZFS memory use? How was L2ARC working in combination with the array disks?
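
    For context, capping the ARC is done via the zfs_arc_max module parameter; a minimal sketch (the 8 GiB value is only an example, and on unRAID it would need to be re-applied at each boot, e.g. from a user script):

    # cap the ZFS ARC at 8 GiB until the next reboot (value is in bytes)
    echo $((8 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max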


    Not sure if you're aware, but ZFS in the main array is kind of pointless, as there is no bit-rot protection: it will let you know there is a problem but cannot fix it (whereas ZFS pools can).

     

    You can still do a ZFS receive, though, to receive snapshots from a ZFS pool.
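
    A minimal sketch of that kind of replication, with placeholder pool and dataset names:

    # snapshot a dataset on a ZFS pool and replicate it to an array disk's pool
    zfs snapshot cachepool/appdata@backup1
    zfs send cachepool/appdata@backup1 | zfs receive disk1/appdata-backup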

    I keep trying to add this info to the documentation, but no one is merging it, so...

    Edited by dopeytree

    Just stumbled upon this post. I just added your scripts to my array start and stop to add/remove a cache device to my ZFS pool (not the array). Unraid cannot do this natively, I guess, and I don't want to confuse Unraid's disk management.

     

    ZFS itself does not use much memory, but it tries to cache a lot, and the cache eats memory. But hey, why would you want to have unused memory? ;)
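
    To see how much of that memory is really just ARC cache (which ZFS releases under memory pressure), a quick check, assuming a standard OpenZFS install:

    # current ARC size in bytes (arc_summary gives a human-readable report, if available)
    awk '/^size / {print $3}' /proc/spl/kstat/zfs/arcstats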




