steini84

Community Developer
Everything posted by steini84

  1. The update is available now. Here is a rough guide on how to compile it yourself:

First off, download this awesome script from gfjardim:

wget https://gist.githubusercontent.com/gfjardim/c18d782c3e9aa30837ff/raw/224264b305a56f85f08112a4ca16e3d59d45d6be/build.sh

Change this line from:
LINK="https://www.kernel.org/pub/linux/kernel/v3.x/linux-${KERNEL}.tar.xz"
to:
LINK="https://www.kernel.org/pub/linux/kernel/v4.x/linux-${KERNEL}.tar.xz"
using:
nano build.sh

Then make it executable:
chmod +x build.sh

Run it with:
./build.sh
Answer 1, 2 & 3 with Y, answer 3.1 and 3.2 with N, answer 3.3 with Y, and answer 4 and 6 with N.

Then make the modules that are needed:
cd kernel
make modules

Then we need to build some dependencies:

#Libuuid
wget "http://downloads.sourceforge.net/project/libuuid/libuuid-1.0.3.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Flibuuid%2F&ts=1453068148&use_mirror=skylink"
tar -xvf libuuid*
cd libuuid-1.0.3
./configure
make
make install

#Zlib
wget http://zlib.net/zlib-1.2.8.tar.gz
tar -xvf zlib-1.2.8.tar.gz
cd zlib-1.2.8
./configure
make
make install

Then we build SPL and ZFS. First download the latest spl and zfs from zfsonlinux.org:

#SPL
wget http://archive.zfsonlinux.org/downloads/zfsonlinux/spl/spl-0.6.5.4.tar.gz
tar -xvf spl-0.6.5.4.tar.gz
cd spl-0.6.5.4
./configure --prefix=/usr
make
make install DESTDIR=$(pwd)/PACKAGE
cd $(pwd)/PACKAGE
makepkg -l y -c n ../spl.tgz
installpkg ../spl.tgz

Load the module:
depmod
modprobe spl

Same for ZFS:

#ZFS
wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-0.6.5.4.tar.gz
tar -xvf zfs-0.6.5.4.tar.gz
cd zfs-0.6.5.4
./configure --prefix=/usr
make
make install DESTDIR=$(pwd)/PACKAGE
cd $(pwd)/PACKAGE
makepkg -l y -c n ../zfs.tgz
installpkg ../zfs.tgz
depmod
modprobe zfs
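A quick sanity check after that last modprobe (just a sketch, not part of the original guide, assuming the build and install went through):

#the loaded module should report the version you just built
modinfo zfs | grep -i version
#with no pools created yet this should simply print "no datasets available"
zfs list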
  2. Yeah, will do tonight. I have to compile for unRAID 6.1.7, so I'll write down the steps.
  3. Thanks man. Will have to implement that
  4. Plugin updated for version 6.1.4. Packages for 6.1.2 & 6.1.3 can be found at https://github.com/Steini1984/unRAID6-ZFS/tree/master/packages
  5. No, you would probably not see any difference in performance on a single SSD, but I use ZFS on my single-SSD laptop for snapshots, checksums and easy backups with zfs send/receive. It was a dealbreaker for me since I want my VMs to stay on a multi-SSD array with redundancy and data integrity. ZFS is what I know and what I trust, so that was my first choice. I have gotten used to automatic snapshots, clones, compression and other goodies that ZFS has, so a hardware RAID was not an option. I am aware that Btrfs has all those options and is built into unRAID, so I decided to give that a try. Learning the Btrfs way of doing these things was fun, but after a couple of days the performance got horrible. My server got a really high (50+) load while writing big files, and all the pain with rebalancing, scrubbing and the "art" of knowing how much free space you have made me rethink things. I wiped everything and started all over with Btrfs, but maybe a week later the performance and my mood had gone down again. I realize it was probably something I did that caused this bad performance, and with enough debugging and massaging of the setup I could have gotten it where I wanted... but knowing ZFS had been rock solid for years and really easy for me to administer, I came to the conclusion that building a plugin would be less work than making my Btrfs setup work. No hate against Btrfs, but ZFS suited me better and I decided to post this plugin in case it would be helpful to others.
  6. Good to hear man If you want to learn some ZFS (and become a fanboy like me) I recommend listening to TechSNAP http://www.jupiterbroadcasting.com/show/techsnap/ especially the feedback section, where there is always some ZFS gold.

But I forgot that you can easily add swap with ZFS under unRAID (code from the Arch wiki):

#first create an 8GB zvol, where <pool> is the name of your pool:
zfs create -V 8G -b $(getconf PAGESIZE) \
 -o primarycache=metadata \
 -o com.sun:auto-snapshot=false <pool>/swap

#then make it a swap partition and enable it
mkswap -f /dev/zvol/<pool>/swap
swapon /dev/zvol/<pool>/swap

#to make it persistent you need to add this to your go file:
swapon /dev/zvol/<pool>/swap
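If you prefer not to edit the go file by hand, the same line can be appended from the terminal; a minimal sketch, with <pool> still standing in for your real pool name:

#append the swapon line to the go file (run once)
echo "swapon /dev/zvol/<pool>/swap" >> /boot/config/go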
  7. The plugin automatically mounts all pools (zpool import -a), and mount points are saved in the pool itself, so there is no need for fstab entries or manual mounting through the go file. ZFS should export the pool on shutdown, but if you want you could try adding this to the stop file:

zpool export <poolname>

Just remember that everyone is new to everything at first. I started with ZFS on FreeBSD and this page is a goldmine: https://www.freebsd.org/doc/handbook/zfs.html - you just have to look past the FreeBSD parts, but all the ZFS commands and concepts are the same. The Arch wiki is always great: https://wiki.archlinux.org/index.php/ZFS

Then there are some good YouTube videos that give you a crash course on ZFS, and these are good if I remember correctly:
https://www.youtube.com/watch?v=R9EMgi_XOoo
https://www.youtube.com/watch?v=tPsV_8k-aVU
https://www.youtube.com/watch?v=jDLJJ2-ZTq8
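A minimal sketch of wiring that up from the terminal, assuming a pool named SSD and that your unRAID version runs the optional /boot/config/stop script at shutdown:

#export the pool cleanly at shutdown
echo "zpool export SSD" >> /boot/config/stop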
  8. Yeah, I can see that 6.1.3 is already out with kernel 4.1.7. I will build a new version and post it later tonight. Then I will have to figure out how the plugin can install different packages based on the version of unRAID.
  9. That is strange. Try running depmod first, then modprobe zfs.
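Spelled out, that would look something like this (a sketch, assuming the packages installed cleanly):

depmod
modprobe zfs
#if the module loaded, zfs should now show up here
lsmod | grep zfs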
  10. ******************************************************
This plugin is deprecated since Unraid 6.12 has native ZFS!
Since this thread was written I have moved my snapshots/backups/replication over to Sanoid/Syncoid, which I like even more, but I will keep the original thread unchanged since ZnapZend is still a valid option:
******************************************************

What is this?
This plugin is a build of ZFS on Linux for unRAID 6.

Installation of the plugin
To install, copy the URL below into the install plugin page in your unRAID 6 web GUI, or install through Community Applications.
https://raw.githubusercontent.com/Steini1984/unRAID6-ZFS/master/unRAID6-ZFS.plg

WHY ZFS and unRAID?
I wanted to put down a little explanation and a mini "guide" that explains how and why I use ZFS with unRAID.
* I use sdx, sdy & sdz as example devices, but you have to change that to your device names.
* SSD is just the name I like to use for my pool, but you use what you like.

My use-case for ZFS is a really simple but powerful way to make unRAID the perfect setup for me. In the past I have run ESXi with unRAID as a guest with PCI pass-through and OmniOS + napp-it as a datastore. Then I tried Proxmox, which had native ZFS, with unRAID again as a guest, but both of these solutions were a little bit fragile. When unRAID started to have great support for VMs and Docker I wanted to have that as the host system and stop relying on a hypervisor. The only thing missing was ZFS, and even though I gave btrfs a good chance it did not feel right for me.

I built ZFS for unRAID in 2015, and as of March 2023 the original setup of 3x SSD + 2x HDD is still going strong running 24/7. That means 7 years of rock solid and problem free uptime.

You might think a ZFS fanboy like myself would like to use FreeNAS or another ZFS based solution, but I really like unRAID for its flexible ability to mix and match hard drives for media files. I use ZFS to complement unRAID and think I get the best of both worlds with this setup. I run a 3 SSD disk pool in raidz that I use for Docker and VMs. I run automatic snapshots every 15 minutes and replicate every day to a 2x 2TB mirror that connects over USB as a backup. I also use that backup pool to rsync my most valuable data to from unRAID (photos etc.), which has the added bonus of being protected with checksums (no bit rot). I know btrfs can probably solve all of this, but I decided to go with ZFS. The great thing about open source is that you have the choice to choose.

Disclaimer/Limitations
The plugin needs to be rebuilt when an update includes a new Linux kernel (there is an automated system that makes new builds, so there should not be a long delay - thanks Ich777).
This plugin does not allow you to use ZFS as a part of the array or a cache pool (which would be awesome by the way). This is not supported by Limetech.
I can't take any responsibility for your data, but it should be fine as it's just the official ZFS on Linux packages built on unRAID (thanks to gfjardim for making that awesome script to set up the build environment). The plugin installs the packages, loads the kernel module and imports all pools.

How to create a pool?
First create a ZFS pool and mount it somewhere under /mnt. Examples:

Single disk pool
zpool create -m /mnt/SSD SSD sdx

2 disk mirror
zpool create -m /mnt/SSD SSD mirror sdx sdy

3 disk raidz pool
zpool create -m /mnt/SSD SSD raidz sdx sdy sdz
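A quick way to confirm the new pool came up the way you intended (just a sketch, using the SSD pool name from the examples above):

#show the vdev layout and health of the pool
zpool status SSD
#show the dataset and where it is mounted
zfs list SSD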
Tweaks
After creating the pool I like to make some adjustments. They are not needed, but they give my server better performance.

My pool is all SSD, so I want to enable trim:
zpool set autotrim=on SSD

Next I add these lines to my go file to limit the ARC memory usage of ZFS (I like to limit it to 8GB on my 32GB box, but you can adjust that to your needs):
echo "#Adjusting ARC memory usage (limit 8GB)" >> /boot/config/go
echo "echo 8589934592 >> /sys/module/zfs/parameters/zfs_arc_max" >> /boot/config/go

I also like to enable compression.
"This may sound counter-intuitive, but turning on ZFS compression not only saves space, but also improves performance. This is because the time it takes to compress and decompress the data is quicker than the time it takes to read and write the uncompressed data to disk (at least on newer laptops with multi-core chips)." -Oracle
To enable compression you run this command (it only applies to blocks written after enabling compression):
zfs set compression=lz4 SSD

And lastly I like to disable access time:
zfs set atime=off SSD

File systems
Now we could just use one file system (/mnt/SSD/), but I like to make separate file systems for Docker and VMs:
zfs create SSD/Vms
zfs create SSD/Docker

Now we should have something like this:
root@Tower:~# zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
SSD          170K   832M    24K  /mnt/SSD
SSD/Docker    24K   832M    24K  /mnt/SSD/Docker
SSD/Vms       24K   832M    24K  /mnt/SSD/Vms

Now we have Dockers and VMs separated, and that gives us more flexibility. For example, we can have different ZFS features turned on for each file system, and we can snapshot, restore and replicate them separately. To have even more flexibility I like to create a separate file system for every VM and every Docker container. That way I can work with a single VM or a single container without interfering with the rest. In other words, I can mess up a single Docker container, and roll back, without affecting the rest of the server.

Let's start with a single Ubuntu VM and a Home Assistant container. While we are at it, let's create a file system for libvirt.img (the create commands are sketched right after the listing below).
*The trick is to add the file system before you create a VM/container in unRAID, but with some moving around you can copy the data directory from an existing Docker container into a ZFS file system after the fact.

Now we have this structure, and each and every one of these file systems can be worked with as a group, subgroup or individually (snapshots, clones, replications, rollbacks etc.):
root@Tower:~# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
SSD                        309K   832M    24K  /mnt/SSD
SSD/Docker                  48K   832M    24K  /mnt/SSD/Docker
SSD/Docker/HomeAssistant    24K   832M    24K  /mnt/SSD/Docker/HomeAssistant
SSD/Vms                     72K   832M    24K  /mnt/SSD/Vms
SSD/Vms/Ubuntu              24K   832M    24K  /mnt/SSD/Vms/Ubuntu
SSD/Vms/libvirt             24K   832M    24K  /mnt/SSD/Vms/libvirt
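The per-VM and per-container file systems in the listing above are just ordinary child datasets; a minimal sketch of creating them, using the same names as the listing (adjust to your own VMs and containers):

zfs create SSD/Docker/HomeAssistant
zfs create SSD/Vms/Ubuntu
zfs create SSD/Vms/libvirt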
unRAID settings
From here you can navigate to the unRAID web GUI and set the default folders for Dockers and VMs to /mnt/SSD/Docker and /mnt/SSD/Vms.
**** There have been reported issues with keeping docker.img on ZFS 2.1 (which will be the default on unRAID 6.10.0). The system can lock up, so I recommend you keep docker.img on the cache drive if you run into any trouble ****
Now when you add a new app via Docker you choose the newly created folder as the config directory. Same with the VMs.

Snapshots and rollbacks
Now this is where the magic happens. You can snapshot the whole pool or you can snapshot a subset. Let's try to snapshot the whole thing, then just Docker (and its child file systems), then one snapshot just for the Ubuntu VM:

root@Tower:/mnt/SSD/Vms/Ubuntu# zfs list -t snapshot
no datasets available
root@Tower:/mnt/SSD/Vms/Ubuntu# zfs snapshot -r SSD@everything
root@Tower:/mnt/SSD/Vms/Ubuntu# zfs snapshot -r SSD/Docker@just_docker
root@Tower:/mnt/SSD/Vms/Ubuntu# zfs snapshot -r SSD/Vms/Ubuntu@ubuntu_snapshot
root@Tower:/mnt/SSD/Vms/Ubuntu# zfs list -r -t snapshot
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
SSD@everything                           0B      -    24K  -
SSD/Docker@everything                    0B      -    24K  -
SSD/Docker@just_docker                   0B      -    24K  -
SSD/Docker/HomeAssistant@everything      0B      -    24K  -
SSD/Docker/HomeAssistant@just_docker     0B      -    24K  -
SSD/Vms@everything                       0B      -    24K  -
SSD/Vms@ubuntu_snapshot                  0B      -    24K  -
SSD/Vms/Ubuntu@everything                0B      -    24K  -
SSD/Vms/Ubuntu@ubuntu_snapshot           0B      -    24K  -
SSD/Vms/libvirt@everything               0B      -    24K  -

You can see that at first we did not have any snapshots, but after creating the first recursive snapshot we have the "@everything" snapshot on every level, we only have "@just_docker" for the Docker related file systems, and the only one that has "@ubuntu_snapshot" is the Ubuntu VM file system.

Let's say we make a snapshot and then destroy the Ubuntu VM with a misguided update. We can just power it off and run:
zfs rollback -r SSD/Vms@ubuntu_snapshot
and we are back at the state the VM was in before we ran the update.

One can also access the snapshots (read only) via a hidden folder called .zfs:
root@Tower:~# ls /mnt/SSD/Vms/Ubuntu/.zfs/snapshot/
everything/ ubuntu_snapshot/

Automatic snapshots
If you want automatic snapshots I recommend ZnapZend, and I have made a plugin available for it here: ZnapZend
There is more information in the plugin thread, but to get up and running you can install it via the plugin page in the unRAID GUI or through Community Applications.
https://raw.githubusercontent.com/Steini1984/unRAID6-ZnapZend/master/unRAID6-ZnapZend.plg

Then run these two commands to start the program and auto-start it on boot:
znapzend --logto=/var/log/znapzend.log --daemonize
touch /boot/config/plugins/unRAID6-ZnapZend/auto_boot_on

Then you can turn on automatic snapshots with this command:
znapzendzetup create --recursive SRC '7d=>1h,30d=>4h,90d=>1d' SSD

The setup is pretty readable, but this example makes automatic snapshots and keeps 24 snapshots a day for 7 days, 6 snapshots a day for a month and then a single snapshot every day for 90 days. The snapshots are also named in an easy to read format:

root@Tower:~# zfs list -t snapshot SSD/Docker/HomeAssistant
NAME                                         USED  AVAIL  REFER  MOUNTPOINT
SSD/Docker/HomeAssistant@2019-11-12-000000  64.4M      -  90.8M  -
SSD/Docker/HomeAssistant@2019-11-13-000000  46.4M      -  90.9M  -
SSD/Docker/HomeAssistant@2019-11-13-070000  28.4M      -  92.5M  -
SSD/Docker/HomeAssistant@2019-11-13-080000  22.5M      -  92.6M  -
SSD/Docker/HomeAssistant@2019-11-13-090000  29.7M      -  92.9M  -
......
SSD/Docker/HomeAssistant@2019-11-15-094500  14.4M      -  93.3M  -
SSD/Docker/HomeAssistant@2019-11-15-100000  14.4M      -  93.4M  -
SSD/Docker/HomeAssistant@2019-11-15-101500  17.2M      -  93.5M  -
SSD/Docker/HomeAssistant@2019-11-15-103000  26.8M      -  93.7M  -

Let's say that we need to go back in time to a good configuration. We know we made a mistake after 10:01, so we can roll back to 10:00:
zfs rollback -r SSD/Docker/HomeAssistant@2019-11-15-100000
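Rolling back discards everything written after the snapshot, so if only a single file went bad it can instead be copied back out of the read-only .zfs directory shown above; a sketch with a hypothetical file name:

#restore one file from the 10:00 snapshot without touching the rest of the dataset
cp /mnt/SSD/Docker/HomeAssistant/.zfs/snapshot/2019-11-15-100000/configuration.yaml /mnt/SSD/Docker/HomeAssistant/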
Backups
I have a USB connected Buffalo DriveStation with 2x 2TB drives which I have added for backups. I decided on a mirror and created it with this command:
zpool create External mirror sdb sdc

Then I created a couple of file systems:
zfs create External/Backups
zfs create External/Backups/Docker
zfs create External/Backups/Vms
zfs create External/Backups/Music
zfs create External/Backups/Nextcloud
zfs create External/Backups/Pictures

I use rsync for basic files (Music, Nextcloud & Pictures) and run this in my crontab:
#Backups
0 12 * * * rsync -av --delete /mnt/user/Nextcloud/ /External/Backups/Nextcloud >> /dev/null
0 26 * * * rsync -av --delete /mnt/user/Music/ /External/Backups/Music >> /dev/null
1 26 * * * rsync -av --delete /mnt/user/Pictures/ /External/Backups/Pictures >> /dev/null

Then I run automatic snapshots on the USB pool (keeping a year's worth):
znapzendzetup create --recursive SRC '14days=>1days,365days=>1weeks' External

The automatic snapshots on the ZFS side make sure that I have backups of files that get deleted in between snapshots (files that are created and deleted within the same day will be lost if accidentally deleted in unRAID).

Replication
ZnapZend supports automatic replication, and I send my daily snapshots to the USB pool with these commands. I have not run into space issues... yet. But this command means a snapshot retention on the USB pool of 10 years (let's see when I need to reconsider):
znapzendzetup create --send-delay=21600 --recursive SRC '7d=>1h,30d=>4h,90d=>1d' SSD/Vms DST:a '90days=>1days,1years=>1weeks,10years=>1months' External/Backups/Vms
znapzendzetup create --send-delay=21600 --recursive SRC '7d=>1h,30d=>4h,90d=>1d' SSD/Docker DST:a '90days=>1days,1years=>1weeks,10years=>1months' External/Backups/Docker

Scrub
Scrubs are used to maintain the pool, kind of like parity checks, and I run them from a cronjob:
#ZFS Scrub
30 6 * * 0 zpool scrub SSD >> /dev/null
4 2 4 * * zpool scrub External >> /dev/null

New ZFS versions:
The plugin checks on each reboot if there is a newer ZFS version available and, if there is, downloads and installs it (with default settings the update check is active). If you want to disable this feature simply run this command from an unRAID terminal:
sed -i '/check_for_updates=/c\check_for_updates=false' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"
If you have disabled this feature already and you want to enable it, run this command from an unRAID terminal:
sed -i '/check_for_updates=/c\check_for_updates=true' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"
Please note that this feature needs an active internet connection on boot. If you run for example AdGuard/PiHole/pfSense/... on unRAID it is very likely that you have no active internet connection on boot, so the update check will fail and the plugin will fall back to installing the currently available local ZFS package.

New unRAID versions:
Please also keep in mind that for every new unRAID version ZFS has to be compiled. I would recommend waiting at least two hours after a new unRAID version is released before upgrading unRAID (Tools -> Update OS -> Update) because of the involved compiling/upload process. Currently the process is fully automated for all plugins that need packages for each individual kernel version.
The Plugin Update Helper will also inform you if a download failed when you upgrade to a newer unRAID version; this is most likely to happen when the compilation isn't finished yet or some error occurred during compilation. If you get an error from the Plugin Update Helper I would recommend creating a post here and not rebooting yet.
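If you want to double-check which ZFS build actually ended up loaded after a reboot or an unRAID upgrade, something like this works (a sketch, not a plugin feature):

#version of the currently loaded module
cat /sys/module/zfs/version
#version reported by the zfs module installed on disk
modinfo zfs | grep ^version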
Unstable builds
Now with the ZFS 2.0.0 RC series I have enabled unstable builds for those who want to try them out:
*ZFS 2.0.0 is out, so there is no need to use these builds anymore.
If you want to enable unstable builds simply run this command from an unRAID terminal:
sed -i '/unstable_packages=/c\unstable_packages=true' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"
If you have enabled this feature already and you want to disable it, run this command from an unRAID terminal:
sed -i '/unstable_packages=/c\unstable_packages=false' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"
Please note that this feature also needs an active internet connection on boot, like the update check (if no unstable package is found, the plugin automatically sets this back to false so that it stops pulling unstable packages - unstable packages are generally not recommended).

Extra reading material
This hopefully got you started, but this example was based on my setup and ZFS has so much more to offer. Here are some links I wanted to share:
Great video explanation of ZFS
Linus Tech Tips on using s
Setting Up a Native ZFS Pool on Unraid
ZFS 101
Level1Techs guide on unRAID and ZFS (great Samba guide)
Creating and Destroying ZFS Storage Pools
ZFS on the Arch wiki
ZFS Concepts and Tutorial
Znapzendzetup manual
ZFS on Linux project site
ZFS terminology
  11. You can let the guest SSH into the host (passwordless with keys) and shut down the whole thing.
  12. Has your installation been stable? I ran into a lot of ReiserFS errors when using passthrough (1x SASLP and 1x M1015).
  13. Yeah it works but there is a pretty irritating bug regarding usb keys that is being looked at in this thread. http://lime-technology.com/forum/index.php?topic=40605.0
  14. I would love to see a v6 final VMDK posted. I will follow this thread in hopes it happens! https://drive.google.com/file/d/0B93BSpm4tzDMTHo0Nmk0RHVzcTA/view?usp=sharing
  15. Corsair Voyager Mini USB 2.0 using ESXi 6.0. Same problem, had to revert to unRAID 5.
  16. Hey guys, I want to update to unRAID 6 tomorrow, but I wanted to see if cache_dirs is still a separate package since so many things have been packaged into the default system. On a test rig I tried this plugin https://github.com/bergware/dynamix/blob/master/unRAIDv6/dynamix.cache.dirs.plg but it caused really high CPU usage and a load over 2. I remember it happening on a different rig when I tried running the script manually on one of the early betas. So what are people doing on unRAID 6?
  17. You can run a web server (Apache, for example) on your PC/laptop and run phpVirtualBox from there.
  18. Hey guys, I would like to be able to use an OpenVPN client on my unRAID box but only route specific traffic through it. Everything would go through the normal network except a short list of addresses that would go through the VPN connection. First off, is that possible? Does anyone have any guidelines on how I could make this happen in unRAID? Secondly, I was wondering about security. If I leave the VPN connected, am I compromising the security of the box? Could I be sharing my unRAID web console etc. with other users on the VPN (paid subscription)? [EDIT] Solved: see this thread http://lime-technology.com/forum/index.php?topic=19439.msg205060#msg205060