steini84 (Community Developer)
  1. Just to put it out there: I have personally moved to sanoid/syncoid. Sent from my iPhone using Tapatalk
  2. You can get away with a one-liner in the go file / User Scripts: wget -P /usr/local/sbin/ "" && chmod +x /usr/local/sbin/ioztat. But packaging this up as a plugin should be a fun project.
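As a sketch, the one-liner above could sit in `/boot/config/go` so ioztat is reinstalled on every boot. The download URL was omitted in the original post, so `IOZTAT_URL` below is a hypothetical placeholder you would replace with the real raw-script URL:

```shell
# Sketch of a /boot/config/go addition that installs ioztat at boot.
# IOZTAT_URL is a hypothetical placeholder; substitute the real URL
# (it was elided in the post above).
IOZTAT_URL="https://example.com/ioztat"
wget -q -P /usr/local/sbin/ "$IOZTAT_URL"
chmod +x /usr/local/sbin/ioztat
```

Because `/usr/local/sbin` lives in RAM on Unraid, anything placed there is lost at reboot, which is why the download belongs in the go file rather than being done once by hand.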
  3. I want to keep the ZFS package as vanilla as possible. It would be a great fit for a plugin.
  4. This is just an unfortunate coincidence; I would guess it's a cabling issue or a dying drive. Have you run a SMART test?
  5. You can also keep it under /mnt/cache, but don't have it on the flash!
  6. "This is not the first time I've started to lose control of my Unraid server, but until now I hadn't lost anything. All I can say is that it started appearing with version 2.0 of the plugin, but maybe it's linked to something else?" Are you keeping your docker.img on the ZFS array? That can cause the lockup you are describing.
  7. What I think would make the most sense is to keep the ZFS plugin as vanilla as possible and move extra functionality to a companion plugin. Then the companion plugin would continue to work when ZFS becomes native.
  8. I use check_mk for monitoring, and that works perfectly for everything from low space to high load and even failed pools. It takes some time to set up but is really good! A simple bash script can be rigged to check zpool status -x. I can set one up when I'm in front of a computer.
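A minimal sketch of such a health check, assuming only that `zpool status -x` prints "all pools are healthy" when nothing is wrong (its standard message); the `check_pools` function name is my own, and hooking the result into check_mk, cron, or mail is left to taste:

```shell
#!/bin/bash
# Minimal pool health check built around `zpool status -x`.
# `zpool status -x` prints "all pools are healthy" when all is well,
# otherwise it prints the status of the troubled pool(s).

check_pools() {
    # $1: the output of `zpool status -x`
    if [ "$1" = "all pools are healthy" ]; then
        echo "OK"
        return 0
    else
        echo "WARNING: $1"
        return 1
    fi
}

# On a real system you would run:
#   check_pools "$(zpool status -x)"
```

Splitting the comparison into a function keeps the `zpool` call at the edge, so the logic can be tested without a pool present.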
  9. You are probably storing docker.img on the ZFS pool and running the latest RC of Unraid. You can also try using folders instead of docker.img.
  10. You have truly taken this plugin to the next level, and with the automatic builds it's as good as it gets until we get native ZFS on Unraid!
  11. You can do a pretty basic test using dd: dd if=/dev/random of=/mnt/SSD/test.bin bs=1MB count=1024. If you want to test the read speed, swap the input and output. *Remember to use a path within your ZFS pool.
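The write-then-read test above can be sketched as the script below. It is sized down (16 MiB) so it runs quickly, uses /dev/urandom as the random source, and defaults to a placeholder path in /tmp; for a real measurement point TESTFILE at a file inside your ZFS pool (e.g. /mnt/SSD/test.bin) and raise the count:

```shell
#!/bin/bash
# Basic dd throughput test: write random data, then read it back.
# TESTFILE defaults to a placeholder in /tmp; for a real benchmark,
# set it to a path inside your ZFS pool.
TESTFILE="${TESTFILE:-/tmp/test.bin}"

# Write test: stream random data to the target path.
# conv=fsync forces the data to disk before dd reports its rate.
dd if=/dev/urandom of="$TESTFILE" bs=1M count=16 conv=fsync 2>/dev/null

# Read test: swap input and output, reading the file back.
dd if="$TESTFILE" of=/dev/null bs=1M 2>/dev/null

# Report the size of the file that was written (16 MiB here).
stat -c %s "$TESTFILE"
```

Note that dd's own transfer-rate summary goes to stderr, so drop the `2>/dev/null` redirections when you actually want to see the speeds.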
  12. Just run this command, then reboot: rm /boot/config/plugins/unRAID6-ZFS/packages/*
  13. This should bring some attention to ZFS on unRAID. Fun to see the plugin being used here!
  14. The thing is that we are not making any changes, just shipping ZFS 2.1 by default. We have shipped 2.0 by default until now because of this deadlock problem, and 2.1 if you enabled "unstable builds" (see the first post). ZFS 2.0 only supports kernels 3.10 through 5.10, but unRAID 6.10 will ship with (at least) kernel 5.13.8, so we have to upgrade to ZFS 2.1. If you are already running ZFS 2.1 on 6.9.2 or 6.10.0-rc1, nothing changes for you. You can check which version is running in two ways:

      root@Tower:~# dmesg | grep -i zfs
      [ 69.055201] ZFS: Loaded module v2.1.0-1, ZFS pool version 5000, ZFS filesystem version 5
      [ 70.692637] WARNING: ignoring tunable zfs_arc_max (using 2067378176 instead)

      root@Tower:~# cat /sys/module/zfs/version
      2.1.0-1
  15. Great video: