Everything posted by steini84

  1. The problem is with using a newer version of ZFS than 2.0.0. I tried saving the img file on ZFS (2.0.4) and both btrfs and xfs images locked up the system; something to do with the loopback mount. Too bad I could not use the folder mapping, as I always got an error that the Docker service could not be started. I moved the docker.img to my cache drive and I don't care too much since it's disposable. I think I will make the latest ZFS the default for the next unRaid upgrade and add a disclaimer that you cannot save the docker.img on ZFS if the problem is still present. Sent from my iPhone using Tapatalk
  2. Just set up automatic snapshots using something like Sanoid on the running VMs. Of course it's always better to turn off the VM if you are doing something like a migration. Sent from my iPhone using Tapatalk
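As a sketch of the Sanoid approach mentioned above, a minimal policy file could look like this; the dataset name (tank/vms) and retention values are placeholders, not from the original post:

```
# /etc/sanoid/sanoid.conf - illustrative sketch; adjust dataset and retention
[tank/vms]
        use_template = production
        recursive = yes

[template_production]
        hourly = 24
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```

Sanoid is then run periodically from cron and creates and prunes snapshots of the dataset according to this policy.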
  3. I was trying this on my server, but the Docker service would not start when I pointed the Docker data-root to a folder on ZFS. Worked fine using folders on Btrfs/Xfs formatted drives.
  4. Builds for 6.9.2 have been added (2.0.0, and 2.0.4 if you have enabled "unstable" builds). Thanks to @ich777 the process is now automated: when a new unRAID version is released, ZFS is built and uploaded automatically. Thanks a lot for this awesome addition!
  5. Check this out. I love it for snapshots and replication
  6. There are packages going back to 6.1.2, so 6.8.1 is definitely supported. Are you maybe on an old beta or RC? In any case I would just update to 6.9. Just for fun I installed 6.8.1 and it worked.
  7. Awesome, great to hear! Here are the relevant parts from the Docker setup, but take note that I have not updated to check_mk 2.0 - a good weekend project for me:
     checkmk/check-mk-raw:1.6.0-latest
     http://[IP]:[PORT:5000]/cmk/check_mk/
     --ulimit nofile=1024 --tmpfs /opt/omd/sites/cmk/tmp:uid=1000,gid=1000
  8. Added a build of ZFS 2.0.4 for 6.9.1 for those who are using unstable
  9. You won't get spindown with ZFS. It's a striped filesystem and you have to read/write to all the disks at once. Sent from my iPhone using Tapatalk
  10. Yes it's still the same Sent from my iPhone using Tapatalk
  11. Added ZFS 2.0.0 for unRAID 6.9.1. There is also 2.0.3 in the "Unstable" folder, which you can manually enable.
  12. Did you try the new bind-mount to a directory that was added? In any case, I moved 2.0.3 to unstable and added a 2.0.0 build instead (thanks ich777). You can move to 2.0.0 by deleting the 2.0.3 package and rebooting:
      rm /boot/config/plugins/unRAID6-ZFS/packages/*
  13. Added ZFS v2.0.0 build for 6.9.0 stable (2.0.3 in the unstable folder). Added ZFS v2.0.3 build for 6.9.0 stable. This hopefully fixes the problems with docker.img and ZFS 2.0.1+: ".. we added the ability to bind-mount a directory instead of using a loopback. If file name does not end with .img then code assumes this is the name of directory (presumably on a share) which is bind-mounted onto /var/lib/docker. For example, if /mnt/user/system/docker/docker then we first create, if necessary the directory /mnt/user/system/docker/docker. If this path is on a user share we then "de-ref
  14. Done. You can still use it by enabling unstable builds Sent from my iPhone using Tapatalk
  15. ZFS v2.0.3 built for Unraid 6.9.0-rc2 (kernel v5.10.1). It is in the main folder, so be advised if there are still problems with having docker.img on ZFS.
  16. I have uploaded a zfs 2.0.2 build for 6.9.0-rc2 but since there have been some reports of errors with the 2.0.1 builds I put it in the unstable folder. @Marshalleq maybe you can let us know if the problem has been fixed in 2.0.2?
  17. Yeah, this should do the trick:
      znapzend --debug --logto=/var/log/znapzend.log --daemonize
      I also want to let you know that I was using znapzend and ran into some problems, for example after changing datasets. I moved over to Sanoid/Syncoid and have not had a single problem since. Not saying that one solution is better than the other, but that was my experience and I would recommend that you check it out.
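For anyone comparing the two tools, a typical Syncoid replication run looks like the sketch below; Sanoid handles the snapshot schedule and Syncoid ships the snapshots. The pool, dataset, and host names are hypothetical:

```
# Replicate a dataset and its children (-r) to a local backup pool
syncoid -r tank/vms backup/vms

# Or push over SSH to another box
syncoid -r tank/vms root@backupserver:backup/vms
```

Both commands require a working ZFS setup on the sending (and, for SSH, receiving) side, so they are meant as a reference rather than something to paste in blindly.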
  18. I have moved the 2.0.1 builds to the unstable folder for now.
      # Enable unstable builds
      touch /boot/config/plugins/unRAID6-ZFS/USE_UNSTABLE_BUILDS
      rm /boot/config/plugins/unRAID6-ZFS/packages/*
      # Then reboot

      # Disable unstable builds
      rm /boot/config/plugins/unRAID6-ZFS/USE_UNSTABLE_BUILDS
      rm /boot/config/plugins/unRAID6-ZFS/packages/*
      # Then reboot
      Please let us know if you are running 2.0.1 without issues, and better yet if the conflict reported by Marshalleq has been identified / resolved.
  19. You can use smb-extra.conf Sent from my iPhone using Tapatalk
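As a sketch of what that can look like, a ZFS dataset can be exported by appending a share definition to /boot/config/smb-extra.conf; the share name, path, and user below are placeholders:

```
[zfsshare]
path = /mnt/tank/share
browseable = yes
writeable = yes
valid users = youruser
```

unRAID pulls this file into its generated Samba configuration, so the share survives reboots; restart SMB (or stop and start the array) after editing for the change to take effect.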
  20. I tried on both 6.9.0-rc2 & 6.8.3 and get the right version after a reboot. I have only run it on my test server and have not seen anything strange. But to be honest I only run ZFS on that install and there is nothing else going on there. Sent from my iPhone using Tapatalk
  21. Built zfs-2.0.1 for 6.8.3 & 6.9.0-rc2. If you want to update ZFS versions you have to remove the old cached version:
      rm /boot/config/plugins/unRAID6-ZFS/packages/*
      And then restart the server. Sent from my iPhone using Tapatalk
  22. Sorry guys, did not see that update. Will build now Sent from my iPhone using Tapatalk
  23. I'm still on 6.8.3 but will see what's going on on my build server tomorrow. Sent from my iPhone using Tapatalk
  24. No, I just wish all my standalone Docker containers and VMs could stay up even though the array was stopped. But to be fair I really seldom have to mess with the array, and I don't think this is really a problem. I have had this running for over 5 years now without a single hiccup and 0 bytes of data loss. And with the daily ZFS replication I had 3 months of daily backups I could go back to if I needed any old configs etc. My point being, this started as a way for me to make the perfect setup for me, and I'm happy that it has helped others. Funny to look back at the beginning and seeing the first