ZFS plugin for unRAID


steini84


34 minutes ago, rinseaid said:

 

Hello, I haven't used ZFS on unRAID in quite some time, so I'm not sure if this still works, but here's roughly what you'd run in the terminal based on the instructions I provided.

 


nano /boot/config/go

 

This will open the nano text editor. If there's anything in the file, use the cursor keys to navigate to the bottom of it. Copy and paste the lines below (in the unRAID web terminal you can paste by right-clicking and selecting Paste):


#Start ZFS Event Daemon
cp /boot/config/zfs-zed/zed.rc /usr/etc/zfs/zed.d/
/usr/sbin/zed &

 

Press CTRL+x, then type 'y' and press enter. This will save the file and exit the nano text editor.

 

Run the following to create the /boot/config/zfs-zed/ directory:


 mkdir /boot/config/zfs-zed/

 

And finally, copy the default zed.rc file into the new directory:


cp /usr/etc/zfs/zed.d/zed.rc /boot/config/zfs-zed/

 

You can then use nano to edit the zed.rc file if you want to hook into unRAID notifications or configure email notifications:


nano /boot/config/zfs-zed/zed.rc

 

Find the two lines mentioned (ZED_EMAIL_PROG and ZED_EMAIL_OPTS) and modify as required.
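For example, this is roughly how you could point zed at the unRAID notification system. Treat it as an untested sketch: the @SUBJECT@ placeholder syntax depends on your zed version, and the notify script path and options shown here are just the ones commonly used on unRAID, so double-check both against your install:

ZED_EMAIL_ADDR="root"                     # must be set for zed to send notifications at all
ZED_EMAIL_PROG="/usr/local/emhttp/webGui/scripts/notify"
ZED_EMAIL_OPTS="-e 'ZFS event' -s '@SUBJECT@' -d '@SUBJECT@' -i 'alert'"

With something like that in place, scrub_finish and similar events should show up as regular unRAID GUI notifications instead of (or in addition to) email.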

 

Reboot after the above changes for the settings to take effect.

 

Side note: you can also turn on SMB sharing of the flash drive from the unRAID web GUI (Main -> Boot Device -> Flash -> SMB security settings -> Export: yes, Security: public), which lets you reach /boot/config over SMB - e.g. \\yourserver\flash\config - and then edit these files with whatever text editor you like.

 

 

 

Thanks for the guide. But I keep getting this message on my system log:

 

 

 

Jan 27 17:50:11 Tower zed[9119]: Finished "all-syslog.sh" eid=40 pid=3441 exit=0
Jan 27 17:50:11 Tower zed[9119]: Invoking "history_event-zfs-list-cacher.sh" eid=40 pid=3442
Jan 27 17:50:11 Tower zed[9119]: Finished "history_event-zfs-list-cacher.sh" eid=40 pid=3442 exit=0
Jan 27 17:50:11 Tower zed[9119]: Invoking "all-syslog.sh" eid=41 pid=3445
Jan 27 17:50:11 Tower zed[9119]: Finished "all-syslog.sh" eid=41 pid=3445 exit=0
Jan 27 17:50:11 Tower zed[9119]: Invoking "history_event-zfs-list-cacher.sh" eid=41 pid=3446
Jan 27 17:50:11 Tower zed[9119]: Finished "history_event-zfs-list-cacher.sh" eid=41 pid=3446 exit=0
Jan 27 17:50:11 Tower zed[9119]: Invoking "all-syslog.sh" eid=42 pid=3447
Jan 27 17:50:11 Tower zed[9119]: Finished "all-syslog.sh" eid=42 pid=3447 exit=0
Jan 27 17:50:11 Tower zed[9119]: Invoking "history_event-zfs-list-cacher.sh" eid=42 pid=3448
Jan 27 17:50:11 Tower zed[9119]: Finished "history_event-zfs-list-cacher.sh" eid=42 pid=3448 exit=0
Jan 27 17:50:11 Tower zed[9119]: Invoking "all-syslog.sh" eid=43 pid=3449
Jan 27 17:50:11 Tower zed: eid=43 class=config_cache_write pool='test'
Jan 27 17:50:11 Tower zed[9119]: Finished "all-syslog.sh" eid=43 pid=3449 exit=0
Jan 27 17:50:11 Tower zed[9119]: Invoking "all-syslog.sh" eid=44 pid=3451
Jan 27 17:50:11 Tower zed: eid=44 class=config_sync pool='test'
Jan 27 17:50:11 Tower zed[9119]: Finished "all-syslog.sh" eid=44 pid=3451 exit=0
Jan 27 17:50:17 Tower zed[9119]: Invoking "all-syslog.sh" eid=45 pid=3862
Jan 27 17:50:17 Tower zed: eid=45 class=scrub_start pool='test'
Jan 27 17:50:17 Tower zed[9119]: Finished "all-syslog.sh" eid=45 pid=3862 exit=0
Jan 27 17:50:17 Tower zed[9119]: Invoking "all-syslog.sh" eid=46 pid=3962
Jan 27 17:50:17 Tower zed[9119]: Finished "all-syslog.sh" eid=46 pid=3962 exit=0
Jan 27 17:50:17 Tower zed[9119]: Invoking "history_event-zfs-list-cacher.sh" eid=46 pid=3983
Jan 27 17:50:17 Tower zed[9119]: Finished "history_event-zfs-list-cacher.sh" eid=46 pid=3983 exit=0
Jan 27 17:50:17 Tower zed[9119]: Invoking "all-syslog.sh" eid=47 pid=3984
Jan 27 17:50:17 Tower zed[9119]: Finished "all-syslog.sh" eid=47 pid=3984 exit=0
Jan 27 17:50:17 Tower zed[9119]: Invoking "history_event-zfs-list-cacher.sh" eid=47 pid=3985
Jan 27 17:50:17 Tower zed[9119]: Finished "history_event-zfs-list-cacher.sh" eid=47 pid=3985 exit=0
Jan 27 17:50:17 Tower zed[9119]: Invoking "all-syslog.sh" eid=48 pid=3986
Jan 27 17:50:17 Tower zed: eid=48 class=scrub_finish pool='test'
Jan 27 17:50:17 Tower zed[9119]: Finished "all-syslog.sh" eid=48 pid=3986 exit=0
Jan 27 17:50:17 Tower zed[9119]: Invoking "scrub_finish-notify.sh" eid=48 pid=3988
Jan 27 17:50:17 Tower zed[9119]: Finished "scrub_finish-notify.sh" eid=48 pid=3988 exit=3
Jan 27 17:50:17 Tower zed[9119]: Invoking "all-syslog.sh" eid=49 pid=3992
Jan 27 17:50:17 Tower zed: eid=49 class=config_sync pool='test'
Jan 27 17:50:17 Tower zed[9119]: Finished "all-syslog.sh" eid=49 pid=3992 exit=0
Jan 27 17:51:18 Tower zed[9119]: Invoking "all-syslog.sh" eid=50 pid=8254
Jan 27 17:51:18 Tower zed: eid=50 class=config_sync pool='UZFS'
Jan 27 17:51:18 Tower zed[9119]: Finished "all-syslog.sh" eid=50 pid=8254 exit=0

 

 

Is this normal? Also, I tried a scrub on the test pool, but I never got a notification popup on the Unraid dashboard.

Link to comment
1 hour ago, tocho666 said:

 

Thanks for the guide. But I keep getting this message on my system log:

 

 

 

Is this normal? Also, I tried a scrub on the test pool, but I never got a notification popup on the Unraid dashboard.

 

Unfortunately I'm not sure, as it's been years since I used this. I don't recall seeing those log entries. Either zed or the unRAID notification system may have changed since I last used this setup. Sorry I can't be of more help.

Link to comment

I bought an Optane 900p 280GB NVMe and added it to the zpool as a SLOG device, but when I run benchmarks or copy files over SMB, "zpool iostat -v" only shows the Optane doing ~100MB/sec writes. I know I can get ~2000MB/sec writes from the drive by itself. I've set compression=lz4, atime=off, sync=always. I've even gone and modified the sysctl.conf parameters, and nothing seems to work. It's as if the SLOG device is not active. Is there an update to the plugin that can solve this issue?
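For reference, this is roughly how I've been checking it ("tank" is just a placeholder for my pool name):

zpool status tank                      # the Optane shows up under the "logs" section
zfs get sync,compression,atime tank    # confirms sync=always etc. took effect
zpool iostat -v tank 1                 # per-vdev throughput, refreshed every second

That last command is where I see the ~100MB/sec on the log vdev during SMB transfers.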

Link to comment

@Joly0 To do what I did for the GitHub ticket, I set up another pool on a spare NVMe drive (figuring a pool would be easier for Unraid to work with than an unassigned disk). Note that it's a single NVMe drive with no other drives in the pool, not even a parity drive.

 

I pointed docker directly at that new pool - but for some reason I found out today that it had put a lot of core docker files on my USB boot drive. I even double-checked, and it was definitely still configured to point at the NVMe drive. I found this out because I thought it was a hangover from something else and deleted the USB drive files (which were just in an appdata folder).

 

I honestly don't know what is wrong with Unraid here. Perhaps it has always had these files on the USB drive and I've just never noticed them before.

 

I'll recreate this on an unassigned disk and try again - after a day or so, though; in the meantime docker works as long as the docker image is not placed on a ZFS drive, as per your experiment.

Link to comment

Hallo everyone,

 

firstly, thanks to @steini84 for this plugin - I'm very interested in it and hope to get it running the way I want.

 

I'm running a relatively new Unraid server in my house:

  • AMD Ryzen 9 3900X
  • ASRock X570 Taichi
  • Kingston 32GB ECC RAM
  • 2x Seagate Exos X16 16TB (Unraid array)
  • 2x WD SN750 1TB NVMe (Unraid cache pool)
  • Dell PERC HBA
  • Silverstone CS380 case (8x hot-swap bays connected directly to the HBA card)
  • 2x additional 5.25" to 3.5" hot-swap frames for another two drives (connected directly to the mainboard)

Everything runs just fine at the moment. I'm learning a lot about Unraid while setting up the machine. Planning started nearly two years ago, and the first thing this year was getting and installing the new hardware.

My use case for the ZFS pool would be something like this:

 

Dockers are directed to the NVMe cache and in my opinion should stay there (I'm running a Plex Media Server on it and like the speed of the NVMes while caching media and metadata for the clients in my house). So my plan is that the dockers stay where they are.

Yesterday I updated my machine and installed two used WD40EFRX drives. I put them in the additional frames that are connected directly to the mainboard (everything is fine; preclear is running at the moment). The plan is that all other drives of the Unraid array get connected to the HBA card as I upgrade and add more drives, so the two 4TB drives stay separate.

 

I want to set up a Nextcloud docker on my system for all of my personal media (at the moment ~1TB of photos, plus many sensitive documents and PDFs - the plan is to digitize all of my paper documents and store them there - and sync two smartphones to Nextcloud, etc.).

So my scenario would be something like:

Installing the Nextcloud docker and setting it up, but storing all the data of my personal cloud on a ZFS pool consisting of the two 4TB drives mentioned above (I would also create sub-filesystems as mentioned in the first post) - see the rough sketch after my questions below.

  1. Is it possible to do this?
  2. After successfully preclearing the drives, they have no filesystem. Do I have to format them with the Unassigned Devices plugin, or can I just create a ZFS pool through the terminal?
  3. If my setup scenario is okay, is it still possible to use ZFS compression? I want to browse my files on multiple devices (through the Nextcloud app), and this is absolutely new to me - I only know WinRAR on a normal Windows PC, where e.g. RAR'd photos can't be browsed without extracting them first (I don't think ZFS compression works that way?!).
  4. (Only far-off plans: if this all works the way I want, would setting up a single-disk ZFS pool (e.g. an external USB backup) let me use the advantages of the ZFS filesystem, so that the internal pool and the backup disk can communicate ideally?)
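For reference, here is roughly what I had in mind based on the first post - the device names, pool name and dataset name are just placeholders, so please correct me if this is the wrong approach:

zpool create -o ashift=12 datapool mirror /dev/sdX /dev/sdY   # mirror of the two 4TB drives (placeholder device names)
zfs set compression=lz4 datapool
zfs set atime=off datapool
zfs create datapool/nextcloud                                 # sub-filesystem that the Nextcloud docker would point at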

Sorry for my rusty English - it's been many years since I last wrote a complete text in English.

Greets from Germany,

dan

Link to comment

@steini84 @ich777 I just tested the problem with the latest ZFS version (2.0.2) and it still persists, but as I already pointed out, it only occurs when the docker.img file is on the ZFS volume. If it's on a default Unraid XFS or BTRFS array, everything works perfectly fine.

 

So in my opinion you could make 2.0.2 a stable release, with a caution to check the storage path of docker.img and a suggestion to put it on an array other than the ZFS one.

Link to comment

I'm having an issue where smbd panics and dumps core on my 6.9.0-rc2 Unraid machine when I try to mount or interact with a share from a ZFS pool. The shares from my array appear to behave correctly (at least until a computer touches one of the ZFS shares). I use one of my ZFS shares for Time Machine backups, so it is accessed regularly, but my backups have failed since installing 6.9.0-rc1.

 

Feb  3 08:41:57 TaylorPlex sshd[21250]: Postponed keyboard-interactive/pam for root from fe80::40f:f3c:e974:1593%br0 port 60150 ssh2 [preauth]

Feb  3 08:41:57 TaylorPlex sshd[21250]: Accepted keyboard-interactive/pam for root from fe80::40f:f3c:e974:1593%br0 port 60150 ssh2

Feb  3 08:41:57 TaylorPlex sshd[21250]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)

Feb  3 08:41:57 TaylorPlex sshd[21250]: Starting session: shell on pts/1 for root from fe80::40f:f3c:e974:1593%br0 port 60150 id 0

Feb  3 08:42:04 TaylorPlex smbd[21286]: [2021/02/03 08:42:04.800103,  0] ../../source3/lib/popt_common.c:68(popt_s3_talloc_log_fn)

Feb  3 08:42:04 TaylorPlex smbd[21286]:   Bad talloc magic value - unknown value

Feb  3 08:42:04 TaylorPlex smbd[21286]: [2021/02/03 08:42:04.800171,  0] ../../source3/lib/util.c:829(smb_panic_s3)

Feb  3 08:42:04 TaylorPlex smbd[21286]:   PANIC (pid 21286): Bad talloc magic value - unknown value

Feb  3 08:42:04 TaylorPlex smbd[21286]: [2021/02/03 08:42:04.800267,  0] ../../lib/util/fault.c:222(log_stack_trace)

Feb  3 08:42:04 TaylorPlex smbd[21286]:   BACKTRACE:

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #0 log_stack_trace + 0x39 [ip=0x1485eed71139] [sp=0x7ffd6205f180]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #1 smb_panic_s3 + 0x23 [ip=0x1485eed20ee3] [sp=0x7ffd6205fac0]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #2 smb_panic + 0x2f [ip=0x1485eed7134f] [sp=0x7ffd6205fae0]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #3 <unknown symbol> [ip=0x1485ee40d497] [sp=0x7ffd6205fbf0]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #4 get_share_mode_lock + 0x32d [ip=0x1485eeff48dd] [sp=0x7ffd6205fc20]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #5 smbd_contend_level2_oplocks_begin + 0xd1 [ip=0x1485eef67741] [sp=0x7ffd6205fc80]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #6 brl_lock + 0x563 [ip=0x1485eefeccd3] [sp=0x7ffd6205fd50]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #7 dcesrv_setup_ncacn_ip_tcp_sockets + 0x2fe [ip=0x1485eefe91ee] [sp=0x7ffd6205fe20]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #8 release_posix_lock_posix_flavour + 0x1cd7 [ip=0x1485eeff2ca7] [sp=0x7ffd6205feb0]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #9 db_open + 0xbae [ip=0x1485eed1740e] [sp=0x7ffd6205fee0]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #10 db_open_rbt + 0x7dd [ip=0x1485ee055a9d] [sp=0x7ffd6205ff70]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #11 dbwrap_do_locked + 0x5d [ip=0x1485ee05331d] [sp=0x7ffd62060020]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #12 db_open + 0x67e [ip=0x1485eed16ede] [sp=0x7ffd62060070]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #13 dbwrap_do_locked + 0x5d [ip=0x1485ee05331d] [sp=0x7ffd620600f0]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #14 share_mode_do_locked + 0xe2 [ip=0x1485eeff4ce2] [sp=0x7ffd62060140]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #15 do_lock + 0x128 [ip=0x1485eefe9e88] [sp=0x7ffd62060190]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #16 <unknown symbol> [ip=0x1485e993a9bc] [sp=0x7ffd62060280]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #17 smbd_smb2_request_process_create + 0xb15 [ip=0x1485eef44e85] [sp=0x7ffd62060390]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #18 smbd_smb2_request_dispatch + 0xd3e [ip=0x1485eef3c4ae] [sp=0x7ffd620604f0]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #19 smbd_smb2_request_dispatch_immediate + 0x730 [ip=0x1485eef3d1c0] [sp=0x7ffd62060580]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #20 tevent_common_invoke_fd_handler + 0x7d [ip=0x1485ee3ca70d] [sp=0x7ffd620605f0]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #21 tevent_wakeup_recv + 0x1097 [ip=0x1485ee3d0a77] [sp=0x7ffd62060620]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #22 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x1485ee3cec07] [sp=0x7ffd62060680]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #23 _tevent_loop_once + 0x94 [ip=0x1485ee3c9df4] [sp=0x7ffd620606a0]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #24 tevent_common_loop_wait + 0x1b [ip=0x1485ee3ca09b] [sp=0x7ffd620606d0]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #25 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x1485ee3ceba7] [sp=0x7ffd620606f0]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #26 smbd_process + 0x7a7 [ip=0x1485eef2c5d7] [sp=0x7ffd62060710]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #27 start_mdssd + 0x2791 [ip=0x56242a18f621] [sp=0x7ffd620607a0]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #28 tevent_common_invoke_fd_handler + 0x7d [ip=0x1485ee3ca70d] [sp=0x7ffd62060870]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #29 tevent_wakeup_recv + 0x1097 [ip=0x1485ee3d0a77] [sp=0x7ffd620608a0]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #30 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x1485ee3cec07] [sp=0x7ffd62060900]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #31 _tevent_loop_once + 0x94 [ip=0x1485ee3c9df4] [sp=0x7ffd62060920]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #32 tevent_common_loop_wait + 0x1b [ip=0x1485ee3ca09b] [sp=0x7ffd62060950]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #33 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x1485ee3ceba7] [sp=0x7ffd62060970]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #34 main + 0x1b2f [ip=0x56242a189c1f] [sp=0x7ffd62060990]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #35 __libc_start_main + 0xeb [ip=0x1485ee201e6b] [sp=0x7ffd62060d40]

Feb  3 08:42:04 TaylorPlex smbd[21286]:    #36 _start + 0x2a [ip=0x56242a189ffa] [sp=0x7ffd62060e00]

Feb  3 08:42:04 TaylorPlex smbd[21286]: [2021/02/03 08:42:04.819304,  0] ../../source3/lib/dumpcore.c:315(dump_core)

Feb  3 08:42:04 TaylorPlex smbd[21286]:   dumping core in /var/log/samba/cores/smbd

Feb  3 08:42:04 TaylorPlex smbd[21286]:

Feb  3 08:42:14 TaylorPlex smbd[21541]: [2021/02/03 08:42:14.825538,  0] ../../source3/lib/popt_common.c:68(popt_s3_talloc_log_fn)

Feb  3 08:42:14 TaylorPlex smbd[21541]:   Bad talloc magic value - unknown value

Feb  3 08:42:14 TaylorPlex smbd[21541]: [2021/02/03 08:42:14.825605,  0] ../../source3/lib/util.c:829(smb_panic_s3)

Feb  3 08:42:14 TaylorPlex smbd[21541]:   PANIC (pid 21541): Bad talloc magic value - unknown value

Feb  3 08:42:14 TaylorPlex smbd[21541]: [2021/02/03 08:42:14.825725,  0] ../../lib/util/fault.c:222(log_stack_trace)

Feb  3 08:42:14 TaylorPlex smbd[21541]:   BACKTRACE:

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #0 log_stack_trace + 0x39 [ip=0x1485eed71139] [sp=0x7ffd6205f180]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #1 smb_panic_s3 + 0x23 [ip=0x1485eed20ee3] [sp=0x7ffd6205fac0]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #2 smb_panic + 0x2f [ip=0x1485eed7134f] [sp=0x7ffd6205fae0]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #3 <unknown symbol> [ip=0x1485ee40d497] [sp=0x7ffd6205fbf0]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #4 get_share_mode_lock + 0x32d [ip=0x1485eeff48dd] [sp=0x7ffd6205fc20]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #5 smbd_contend_level2_oplocks_begin + 0xd1 [ip=0x1485eef67741] [sp=0x7ffd6205fc80]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #6 brl_lock + 0x563 [ip=0x1485eefeccd3] [sp=0x7ffd6205fd50]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #7 dcesrv_setup_ncacn_ip_tcp_sockets + 0x2fe [ip=0x1485eefe91ee] [sp=0x7ffd6205fe20]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #8 release_posix_lock_posix_flavour + 0x1cd7 [ip=0x1485eeff2ca7] [sp=0x7ffd6205feb0]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #9 db_open + 0xbae [ip=0x1485eed1740e] [sp=0x7ffd6205fee0]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #10 db_open_rbt + 0x7dd [ip=0x1485ee055a9d] [sp=0x7ffd6205ff70]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #11 dbwrap_do_locked + 0x5d [ip=0x1485ee05331d] [sp=0x7ffd62060020]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #12 db_open + 0x67e [ip=0x1485eed16ede] [sp=0x7ffd62060070]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #13 dbwrap_do_locked + 0x5d [ip=0x1485ee05331d] [sp=0x7ffd620600f0]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #14 share_mode_do_locked + 0xe2 [ip=0x1485eeff4ce2] [sp=0x7ffd62060140]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #15 do_lock + 0x128 [ip=0x1485eefe9e88] [sp=0x7ffd62060190]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #16 <unknown symbol> [ip=0x1485e993a9bc] [sp=0x7ffd62060280]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #17 smbd_smb2_request_process_create + 0xb15 [ip=0x1485eef44e85] [sp=0x7ffd62060390]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #18 smbd_smb2_request_dispatch + 0xd3e [ip=0x1485eef3c4ae] [sp=0x7ffd620604f0]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #19 smbd_smb2_request_dispatch_immediate + 0x730 [ip=0x1485eef3d1c0] [sp=0x7ffd62060580]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #20 tevent_common_invoke_fd_handler + 0x7d [ip=0x1485ee3ca70d] [sp=0x7ffd620605f0]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #21 tevent_wakeup_recv + 0x1097 [ip=0x1485ee3d0a77] [sp=0x7ffd62060620]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #22 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x1485ee3cec07] [sp=0x7ffd62060680]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #23 _tevent_loop_once + 0x94 [ip=0x1485ee3c9df4] [sp=0x7ffd620606a0]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #24 tevent_common_loop_wait + 0x1b [ip=0x1485ee3ca09b] [sp=0x7ffd620606d0]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #25 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x1485ee3ceba7] [sp=0x7ffd620606f0]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #26 smbd_process + 0x7a7 [ip=0x1485eef2c5d7] [sp=0x7ffd62060710]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #27 start_mdssd + 0x2791 [ip=0x56242a18f621] [sp=0x7ffd620607a0]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #28 tevent_common_invoke_fd_handler + 0x7d [ip=0x1485ee3ca70d] [sp=0x7ffd62060870]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #29 tevent_wakeup_recv + 0x1097 [ip=0x1485ee3d0a77] [sp=0x7ffd620608a0]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #30 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x1485ee3cec07] [sp=0x7ffd62060900]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #31 _tevent_loop_once + 0x94 [ip=0x1485ee3c9df4] [sp=0x7ffd62060920]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #32 tevent_common_loop_wait + 0x1b [ip=0x1485ee3ca09b] [sp=0x7ffd62060950]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #33 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x1485ee3ceba7] [sp=0x7ffd62060970]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #34 main + 0x1b2f [ip=0x56242a189c1f] [sp=0x7ffd62060990]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #35 __libc_start_main + 0xeb [ip=0x1485ee201e6b] [sp=0x7ffd62060d40]

Feb  3 08:42:14 TaylorPlex smbd[21541]:    #36 _start + 0x2a [ip=0x56242a189ffa] [sp=0x7ffd62060e00]

Feb  3 08:42:14 TaylorPlex smbd[21541]: [2021/02/03 08:42:14.846140,  0] ../../source3/lib/dumpcore.c:315(dump_core)

Feb  3 08:42:14 TaylorPlex smbd[21541]:   dumping core in /var/log/samba/cores/smbd

Link to comment
On 2/2/2021 at 9:47 PM, ich777 said:

What are core docker files? What do you mean exactly?

Good question - I should be more specific. I found files on the boot drive that I assumed were core to docker's functioning. My bad for assuming, but I deleted them because I figured they shouldn't be there, given that I point docker at a different drive. That deletion removed my docker config, which was then recreated - and which I assume (again, sorry) is back on the boot drive. I should check that.

 

So unless it's obvious to you, I assume docker requires some files somewhere in /boot in addition to those specified on the docker config GUI page, which I did double-check was correctly configured NOT to point at the boot drive. It seemed logical that, since my configuration pointed elsewhere and deleting these files broke the docker config, they were core to docker (on Unraid) functioning.

Link to comment

Nothing *docker core* should ever wind up on the flash drive. (The exception is the templates within /config/plugins/dockerMan, but that's Unraid-specific, not docker per se.)

 

But the storage driver that docker requires for using the "image" as a directory/folder on a ZFS device is not included in Unraid. There should be no issues whatsoever with having it as a BTRFS/XFS image on a ZFS device. Check Settings -> Docker (with the service stopped) to confirm.

Link to comment

@squid 

 

For those who want the quick summary, this thread ends with: "There's different drivers for docker to be on different filesystems.  AFAIK, there's only the 2 included in the OS -> btrfs and xfs.  You could probably do a feature req to have the driver for zfs included for docker"

Link to comment

@Squid Is there a particular developer you could recommend who knows the ins and outs of docker and might be able to steer us on this issue? It would be great to have it sorted out so that @limetech doesn't have this as a blocker for any future ZFS implementation. We are all looking forward to that, and it would be disappointing to have another reason to delay our chances. 😡

Link to comment

ZFS v2.0.3 built for Unraid 6.9.0-rc2 (Kernel v5.10.1)

It is in the main folder, so be advised in case there are still problems with having docker.img on ZFS:

On 2/2/2021 at 10:50 AM, Joly0 said:

check the storage path of the docker.img and put it on an array other than the ZFS one.

 

Link to comment

For clarity, I do believe someone posted previously that 2.0.3 still exhibits this issue, and the workaround is to ensure your docker.img file is not placed on a ZFS partition. I'm stuck on 2.0.1 because I no longer have a filesystem other than ZFS (and don't want one).

 

We seem to be at a deadlock where we require some help from someone knowledgeable about the inner workings of docker on unraid.

 

@ich777 Perhaps with your kernel helper there would be a way to build ZFS support into docker as a workaround, as per Squid's comment below? Potentially we could then get rid of docker.img entirely, which seems to be the source of the issue with running a loop device on ZFS > 2.0.1.
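(If the zfs storage driver were usable with Unraid's docker build, my understanding is that switching to it would in principle just be a daemon setting plus having docker's data root live on a ZFS dataset - purely a hedged sketch, since per Squid's comment only the btrfs and xfs drivers seem to be included:

# sketch only - tell docker to use its zfs graph driver
echo '{ "storage-driver": "zfs" }' > /etc/docker/daemon.json
# requires /var/lib/docker to sit on a ZFS filesystem

How Unraid actually manages the docker daemon config may make this a non-starter, so treat it as a thought experiment.)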

 

@joly0 Just thinking about your report that the problem process was loopback - I wonder if we could test a number of different loopback devices to see if there's anything in that.
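As a first step, a quick way to see which backing file each loop device is using (to confirm whether the docker.img loop is the one sitting on ZFS) - these are standard util-linux and docker commands, nothing plugin-specific:

losetup -l                               # lists loop devices and the files backing them
docker info | grep -i 'storage driver'   # shows which graph driver docker is currently using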

 

On 12/31/2020 at 12:39 PM, Squid said:

There's different drivers for docker to be on different filesystems.  AFAIK, there's only the 2 included in the OS -> btrfs and xfs.  You could probably do a feature req to have the driver for zfs included for docker

 

Your other option is either a btrfs or xfs image stored on the zfs pool.
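In practice I guess that would just mean keeping the image file on a dataset of the pool and pointing the Docker settings at it - a rough sketch with hypothetical pool/dataset names:

zfs create tank/system                   # hypothetical dataset to hold the docker image
# then in Settings -> Docker (service stopped), set the vDisk location to something like
# /tank/system/docker.img (or wherever that dataset is mounted)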

Link to comment
On 1/20/2021 at 11:26 AM, Joly0 said:

although my lancache doesnt work due to some syscalls (afaik)

 

Lack of sendfile syscall support - I think lancache-bundle on ZFS stopped working when I moved from 6.8.3 to 6.9-rc2.  I posted an awful-but-functional workaround script here:

 

 

Just checked, and I seem to be on zfs-2.0.0-1 - this thread makes me nervous, as I have everything docker-related running on ZFS.

Link to comment
