ZFS plugin for unRAID



Is there a way to import pools from TrueNAS 12.2?

Plugin shows no pools.

Would rather run my pools directly in unRAID instead of passing my HBA through to a TrueNAS VM.

Cheers!

Nevermind....

Eventually figured it out.

Had to import manually using the pool name...
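For anyone who finds this later, the manual import is roughly the following (the pool name "tank" below is just a placeholder, substitute your own):

zpool import          # list pools that are visible but not yet imported
zpool import tank     # import the pool by name
zpool import -f tank  # add -f if the pool was last used on another system and wasn't exported cleanly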

 

Now I just wish I knew how to pass the pool mounts through to a VM...


 

Edited by theonewithin

@theonewithin Unraid doesn't work like FreeNAS because ZFS isn't supported (yet). I assume that you're trying to connect to datasets. Personally I've only used image files on ZFS on Unraid, which work extremely well. If you want to try a dataset because that's how all of yours are set up, I suspect you will have to look into the whole disk-by-id thing or something like that. Otherwise it is possible to convert them to an image using qemu convert.
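As a rough sketch of that qemu conversion (every path and name below is made up purely for illustration):

qemu-img convert -p -f raw -O qcow2 /dev/zvol/mypool/vmdisk /mnt/user/domains/myvm/vdisk1.qcow2   # copy a zvol into a qcow2 image, showing progress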

  • 2 weeks later...

I was actually able to import the pools fine and access the data just as you would any mounted drive.

But I made a mistake with where I mounted the pools, and they now overlap and don't work...

Have to figure out how to stop it from mounting them on startup.

Any tips?

OK, got that sorted, actually... I just imported the pools back into TrueNAS, and then Unraid allowed me to import them manually.
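If anyone else ends up with overlapping mountpoints, something along these lines should also do it (with "tank" standing in for your real pool name):

zfs set mountpoint=/mnt/tank tank   # point the pool's top-level dataset somewhere that doesn't overlap anything
zpool export tank                   # or export the pool entirely so nothing mounts until you re-import it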

Got rid of all my VMs and moved to Dockers, so now it should be nice and easy to get this to work...

 

Edited by theonewithin

Has anyone run into any issues lately trying to get Plex to run when the config is pointed to your ZFS pool?

No matter which container I try (binhex, linuxserver, the official one), I cannot get Plex to run. I get the same error when the container tries to start.

 

Error: Unable to set up server: sqlite3_statement_backend::prepare: no such table: schema_migrations for SQL: select version from schema_migrations order by version (N4soci10soci_errorE)

 

If I change the app /config path to just a share on the array, it works instantly.

 

I've done some searching and stumbled across suggestions that SQLite3 doesn't like the FUSE file system and thus needs to be pointed to a specific disk or the cache drive.

 

I can't imagine no one else uses Plex on ZFS, so I'm not sure if I am doing something wrong. I had Jellyfin running previously without issue, but now that I'm migrating back to Plex I'm hitting this.

 

My setup is similar to the main post: a zpool called "Engineering" mounted under /mnt, with Docker and VMs datasets under that for each container and VM I am running.

 

So far only Plex is giving me this issue when I try to point the config path to /mnt/Engineering/Docker/Plex

 

Any help or thoughts?


I run every Docker from ZFS filesystems.
Just make sure the access mode of the ZFS paths you add to Plex is set to "Read/Write - Slave".
You can reach that setting by clicking the edit button next to the path you define in the Docker settings.
For any non-array filesystem, set it like that.
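For reference, that setting just adds the slave propagation flag to the bind mount, so on the command line it ends up looking roughly like this stripped-down sketch (the container name and image here are only examples):

docker run -d --name='plex-test' --net='host' \
  -v '/mnt/Engineering/Docker/Plex/':'/config':'rw,slave' \
  binhex/arch-plex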

3 hours ago, glennv said:

I run every Docker from ZFS filesystems.
Just make sure the access mode of the ZFS paths you add to Plex is set to "Read/Write - Slave".
You can reach that setting by clicking the edit button next to the path you define in the Docker settings.
For any non-array filesystem, set it like that.

 

Sadly, I have already tried that, since I saw you suggest it previously. Still the same error.

 

Here is the docker run command

 

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-plex' --net='host' -e TZ="America/Chicago" -e HOST_OS="Unraid" -e 'TRANS_DIR'='/config/transcode' -e 'NVIDIA_DRIVER_CAPABILITIES'='all' -e 'NVIDIA_VISIBLE_DEVICES'='' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/Holodeck/':'/media':'rw,slave' -v '/mnt/Engineering/Docker/Plex/':'/config':'rw' 'binhex/arch-plex'

 

Here are my binhex-plexpass docker settings when setting it up new.

[screenshot: Unraid Docker settings for Plex with the ZFS config path]

 

/mnt/user/Holodeck is a standard share on Unraid with all my media.

/mnt/Engineering/Docker/Plex/ is a ZFS dataset created with zfs create Engineering/Docker/Plex

 

I've tried destroying the dataset and completely removing the Plex files, redownloading the container from scratch, rebooting the server, reinstalling the ZFS plugin, and trying other Plex containers.

 

No matter what, I get the same exact error saying SQLite3 has no such table.

 

I've been on ZFS for the past 6-12 months, I believe, and so far have never had this issue. Jellyfin worked fine with the same setup that I am trying to use for Plex, so I'm not sure if there is something else going on that is causing this.

 

Edit:

I should also mention that in the above setup I have the media folder set to Read/Write - Slave. I have tried the ZFS config directory as Read/Write - Slave as well, to the same effect.

Edited by Koreican

I just deleted my docker.img and let it rebuild; same issue. For funsies I re-set up Jellyfin without issue and didn't even have to use Read/Write - Slave.

 

It seems like this is some weird issue between Plex's SQLite usage and ZFS.

 

For reference I am also on Unraid 6.9.2

53 minutes ago, Koreican said:

I just deleted my docker.img and let it rebuild; same issue. For funsies I re-set up Jellyfin without issue and didn't even have to use Read/Write - Slave.

 

It seems like this is some weird issue between Plex's SQLite usage and ZFS.

 

For reference I am also on Unraid 6.9.2

 

I have a very similar setup to you (nested datasets for appdata and then individual Dockers), and I've never run into this issue.

 

I just checked and mine is using plain Read/Write; I'm not aware of that causing any issues either.

 

Are there any existing files in the plex appdata folder that you've copied from elsewhere?  Could it be permissions related?

 

chown -R nobody:users /mnt/Engineering/Docker/Plex

 

Same issue with an empty /mnt/Engineering/Docker/Plex/ folder owned by nobody?

 


Every test I have done has been after doing a zfs destroy on the dataset the containers install to, so it's fresh every time.

 

I can also confirm permissions are being set as they should be. I can see all the files get created when the container sets up, and the library file is there, but it seems like it can't create the table it's looking for, for some odd reason.

 

I am at the point where maybe I will try a fresh install of the OS and see if there is some underlying issue going on.


This person quite recently had the same issue on the ARM version of Plex: https://forums.plex.tv/t/failed-to-run-packege-service-i-tried-many-solutions-but-did-not-work/726127/29

 

In the end they used a different version of Plex to do the install. Might be worth forcing an older version of the Plex Docker?

 

Edit: Failing that, it might be worth a post on the Plex forums with a reference to the above thread, noting that you seem to have the same issue in an x86 Docker. The few issues I've had with Dockers and ZFS seem to involve applications doing direct storage calls that ZFS doesn't support. Maybe the latest versions of Plex are doing something similar during new installs?

 

Have you also tried copying a working database created on non-ZFS storage?

Edited by jortan

Hmm, so I just did a complete fresh install of the Unraid OS, reassigned my data/parity drives as normal, installed the ZFS plugin, and then tried binhex-plex: same issue.

 

So yeah, something seems up with either the newer Plex versions being pulled or some weird interaction with ZFS.

 

Since this is fresh, it could be that something is going wrong creating the tables on a new install; for people who already had a database made and running it doesn't matter, since the tables were already there.

 

I'll start tinkering with trying to get an older build of Plex installed.


Alright got it working.

 

Using binhex-plexpass as my container, I went back and installed the previous version 1.23.0.4438-1-01, which is from 3 months ago. I just picked a random version, and that seemed like the first 1.23.x.

 

During Docker setup I set the repository to binhex/arch-plexpass:1.23.0.4438-1-01 to pull that version, and Plex set up fine without error.

 

I then shut down the container, removed the tag so Plex would update to the latest 1.23.5.4801-1-01, restarted the container, and voila: no issues with the database and it works perfectly. I confirmed in the Plex web interface that the current version is indeed 1.23.5.4801, which looks like it came out 5 days ago.
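Roughly the command-line equivalent of what I did through the template (the tags are the same ones mentioned above; pulling :latest just mirrors removing the tag):

docker pull binhex/arch-plexpass:1.23.0.4438-1-01   # start on the older build so Plex can create its database on the ZFS dataset
docker pull binhex/arch-plexpass:latest             # once the database exists, move back to the current build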

 

I also went ahead and set my /config directory to "Read/Write - Slave".

 

Hopefully this helps anyone going forward if they are doing a fresh install with ZFS and Plex.

 

It does seem like something must have recently changed so that the table isn't being created properly; it's fine if you were already running Plex, but on new installs it seems to prevent Plex from even starting.

 

Maybe if I have some time I will go through and see at which point the install fails, and submit a report to Plex so they can investigate further.

 

Edit:

Also thanks for the suggestions and help everyone!

Edited by Koreican

I need to do a zpool replace, but what is the syntax for use with unRaid? I'm not sure how to reference the failed disk.

 

I need to replace the failed disk without trashing the ZFS mirror. A 2-disk mirror has dropped one device. Unassigned Devices does not even see the failing drive at all any more. I rebooted and swapped the slots for these 2 mirrored disks, and the same problem remains. The failure follows the missing disk.

 

zpool status -x
  pool: MFS2
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 03:03:13 with 0 errors on Thu Apr 29 08:32:11 2021
config:

        NAME                     STATE     READ WRITE CKSUM
        MFS2                     DEGRADED     0     0     0
          mirror-0               DEGRADED     0     0     0
            3739555303482842933  FAULTED      0     0     0  was /dev/sdf1
            sdf                  ONLINE       0     0     0

 

My ZFS pool had 2 x 3TB Seagate spinning rust for VMs and Docker. Both the VMs and Docker seem to continue to work with a failed drive, but aren't working as fast. I have installed another 3TB drive that I want to replace the failed drive with.


Here is the Unassigned Devices view with the current ZFS disk and a replacement disk that was in the array previously.

 

[screenshot: Unassigned Devices showing the current ZFS disk and the replacement disk]

 

I will do the zpool replace, but what is the syntax for use with unRaid? Is it

 

zpool replace 3739555303482842933 sdi

 

Edited by tr0910

Besides using the full path for actual drives (so the new replacement drive sdi), it also needs the pool name, so: zpool replace poolname origdrive newdrive.
Origdrive can be a funny name, as you see when the actual drive is gone.
I would advise always addressing drives by their /dev/disk/by-id/xxxxxx address instead. Go to that directory and you will find your drives and the correct IDs there.
These unique IDs will never change, while the /dev/sd? identifiers can change after a boot or when adding or removing drives.
That prevents accidentally wiping the wrong drive.
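For example (the id below is a placeholder, use whatever ls shows for your new drive):

ls -l /dev/disk/by-id/ | grep sdi                                            # find the stable id that points at the new drive
zpool replace MFS2 3739555303482842933 /dev/disk/by-id/ata-EXAMPLE_SERIAL    # placeholder id, substitute the real one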

You can check the status of the replacement with zpool status.
It will take a while, obviously...

 zpool replace poolname origdrive newdrive

 

Just to clarify, "origdrive" refers to whatever identifier ZFS currently has for the failed disk. So yes, this is 3739555303482842933 (a ZFS id; apparently the drive located there has failed to the point where it wasn't assigned a /dev/sdX device).

 

So the command should be

 

zpool replace MFS2 3739555303482842933 sdi

 

As long as you understand that this is how you refer to drives when replacing disks using zpool, there's not much chance of replacing the wrong drive.


 

4 hours ago, glennv said:

I would advise always addressing drives by their /dev/disk/by-id/xxxxxx address instead

 

I understand that's a common recommendation, but in my experience I just reference the normal drive locations /dev/sda, /dev/sdb.  ZFS never seems to have any issue finding the correct disks, even when the order has changed.

 

In my array, ZFS has switched by itself to using drive IDs; this may only occur if the disk order has changed?


 


 

10 hours ago, glennv said:

The common recommendation is not so that ZFS doesn't get confused, but for the humans operating the system.

 

It's because ZFS pools might not import on startup if the device locations have changed:

https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html#selecting-dev-names-when-creating-a-pool-linux

 

My not having any issues with this might be down to the fact that unRAID doesn't have a persistent zpool.cache (as far as I know).  To each their own!
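If you ever do want an existing pool to use the by-id names, exporting and re-importing with -d is normally enough (a sketch, using the pool name from above):

zpool export MFS2
zpool import -d /dev/disk/by-id MFS2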


@glennv @jortan I have installed a drive that is not perfect and started the resilvering (this drive has some questionable sectors). Might as well start with the worst possible case and see what happens if resilvering fails. (grin)

 

I have Docker and VMs running from the degraded mirror while the resilvering is going on. Hopefully this doesn't confuse the resilvering.

 

How many days should a resilver take to complete on a 3TB drive? It's been running for over 24 hours now.

zpool status
  pool: MFS2
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Jul 26 18:28:07 2021
        545G scanned at 4.30M/s, 58.6G issued at 473K/s, 869G total
        49.1G resilvered, 6.75% done, no estimated completion time
config:

        NAME                       STATE     READ WRITE CKSUM
        MFS2                       DEGRADED     0     0     0
          mirror-0                 DEGRADED     0     0     0
            replacing-0            DEGRADED     0     0     0
              3739555303482842933  FAULTED      0     0     0  was /dev/sdf1
              sdi                  ONLINE       0     0     0  (resilvering)
            sdf                    ONLINE       0     0     0

errors: No known data errors

 

Edited by tr0910
545G scanned at 4.30M/s, 58.6G issued at 473K/s, 869G total

 

This should give you some idea: 869G is allocated in the pool, 545G has been scanned, and 58.6G has been written to the replacement disk so far.
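As a very rough back-of-envelope from those numbers: about 810G still to issue at 473K/s works out to roughly (810 x 1024 x 1024) / 473 ≈ 1.8 million seconds, or around three weeks, although the issue rate usually climbs well past that early figure once the scan phase finishes, so treat it as a pessimistic bound.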

 

Hopefully this doesn't confuse the resilvering.

 

It won't cause any problems, but it will slow down the resilvering process. There are some ZFS tunables you can modify to change the I/O priority, but the safest thing is probably just to let it complete. Consider turning off any high-I/O VMs/Dockers that you don't need to have running.
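If you do want to nudge the priority, the tunables live under /sys/module/zfs/parameters; for example (the value here is only an illustration, and defaults vary by ZFS version):

cat /sys/module/zfs/parameters/zfs_resilver_min_time_ms              # minimum time per txg the resilver is guaranteed
echo 5000 > /sys/module/zfs/parameters/zfs_resilver_min_time_ms      # give resilver I/O a larger slice (example value)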

 

 


As jortan mentioned, a non-perfect SSD will slow this process down. On a healthy SSD of that size this only takes a few hours (or less). Heck, even spinning rust, which I did last week when replacing 4TB in a ZFS mirror used for backups, took less than half a day. So maybe it's time to invest in a few new SSDs ;-)

Also, running active VMs etc. on it will not help the speed.

 

Edit: on rereading I see you are talking about normal drives and not SSDs. Sorry for that. Still pretty slow IMHO, so the same advice applies: get a nice, fresh-smelling new drive when replacing bad drives. Don't replace bad with questionable unless you like to live on the edge.
Running VMs while resilvering on normal drives is about as worst-case as you can get, so be patient. Should finish in a week or so ;-);-)

Edited by glennv
3 hours ago, glennv said:

 Should finish in a week or so ;-);-)

Yep, and just as it passed 8% we had a power blink from a lightning storm, and I intentionally did not have this plugged into the UPS. It failed gracefully, but restarted from zero. I have perfect drives that I will replace this with, but why not experience all of ZFS's quirks while I have the chance. If the drive fails during resilvering I won't be surprised. If ZFS can manage the resilvering without getting confused on this dingy hard drive, I will be impressed.

