ezra

Members
  • Content Count

    31
  • Joined

  • Last visited

Community Reputation

13 Good

About ezra

  • Rank
    Newbie


  1. @Steini84 Can't we get you to do a webinar on the ZFS setup on your Unraid server? So to be clear: 1. update the plugin, 2. upgrade Unraid to rc1, and that will keep everything as is, in theory? Thanks for your work on this plugin, much appreciated.
  2. It should be as easy as removing the ZFS plugin, upgrading to beta35 and doing the side load, right? Does anyone know if they are also working on GUI/array support for ZFS? I'd love to contribute to the development but haven't a clue where. Unraid with ZFS array support would solve all my needs! This is the first frontend I've seen next to FreeNAS for ZFS: https://github.com/optimans/cockpit-zfs-manager I use it for my Proxmox servers, though up until this point I only managed ZFS via the CLI, which is still fine. But I love the snapshot/pool creation from the UI!
  3. Use -m during import to specify the mount point, not -d. Afterwards use "zfs set mountpoint=/path pool". Useful commands: zfs --help and zpool --help.
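The mount-point commands above, as a sketch (the pool name "tank" and path "/mnt/tank" are placeholders, not from the post):

```shell
# Placeholder pool name and path; adjust to your setup.
zpool import tank                     # import the pool first
zfs set mountpoint=/mnt/tank tank     # point the root dataset at the desired path
zfs get mountpoint tank               # verify the new mount point
```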
  4. I'm trying to figure this out as well. The current Unraid plugin docker fails to download. I'll report back if I find out anything.
  5. Oh my god, sorry for wasting your time... Totally overlooked that.
  6. Hello Stein, the problem is I set up those snapshot rules 3 days ago and nothing is added to: zfs list -r -t snapshot. I'll see if "run once" triggers the automated backups to start. Thank you.
  7. Hello! I've installed the plugin for someone else; on his Unraid 6.3 we don't see any snapshots created by znapzend. Reinstalling did not help.

     *** backup plan: HDD ***
     enabled = on
     mbuffer = off
     mbuffer_size = 1G
     post_znap_cmd = off
     pre_znap_cmd = off
     recursive = on
     src = HDD
     src_plan = 24hours=>2hours,7days=>1day,30days=>7days,90days=>30days
     tsformat = %Y-%m-%d-%H%M%S
     zend_delay = 0

     *** backup plan: NVME ***
     dst_0 = HDD/Backup/NVME
     dst_0_plan = 1day=>
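For reference, a hedged sketch of how one might check whether znapzend actually picked up these plans (znapzendzetup ships with znapzend; the pool names are the ones from the post):

```shell
znapzendzetup list              # show the backup plans znapzend knows about
zfs list -r -t snapshot HDD     # snapshots created under the HDD pool
zfs list -r -t snapshot NVME    # and under the NVME pool
```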
  8. It only imports the pool. Just delete the folder and reboot to see if it's still there. It should just be a leftover or an unknown typo.
  9. First try: umount /mnt/ssd500gb. If the output is something like "directory is not mounted", then: rm -r /mnt/ssd500gb (this will delete the entire folder, so make sure there's nothing in there). Then, or before that, check with: df -h. If /mnt/ssd500gb is listed somewhere, and /mnt/disks/ssd500gb also
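The cleanup steps above as one sketch (same path as in the post):

```shell
df -h | grep ssd500gb     # is the path still mounted anywhere?
umount /mnt/ssd500gb      # unmount it if so ("not mounted" output is fine too)
rm -r /mnt/ssd500gb       # remove the leftover folder; check it's empty first
```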
  10. For me, destroying the pool does the job; you can try to reinstall the ZFS plugin and issue: zpool status or zpool import -a and see if there is still something left. For all the others: I have found out how to use zvols for VM storage (so you can make use of snapshots; with raw .img you can't, and I only had success with qcow2 on Ubuntu/Debian servers, desktops failed to do snapshots on qcow2). zfs create -V 50G pool/zvolname then set the VM config for the disk to manual: /dev/zvol/pool/zvolname And the type to virtio or sata (whatever works for you, virtio still t
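A sketch of the zvol workflow described above ("pool" and "zvolname" are the placeholder names from the post; the snapshot name is an assumption):

```shell
zfs create -V 50G pool/zvolname        # create a 50 GB zvol for the VM disk
# In the VM config, set the disk path manually to the block device:
#   /dev/zvol/pool/zvolname   (bus type virtio or sata)
zfs snapshot pool/zvolname@clean-install   # zvols snapshot like any dataset
```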
  11. Hello, I'm trying to install OPNsense (hardened BSD). I've tried SeaBIOS/OVMF, i440fx and Q35; nothing works. Either I'm able to start the VM and I get to this screen and then it hangs: Or I can't even save the VM with i440fx, giving this error: `XML error: The PCI controller with index='0' must be model='pci-root' for this machine type, but model='pcie-root' was found instead` My working-until-frozen config: <?xml version='1.0' encoding='UTF-8'?> <domain type='kvm'> <name>OPNsense</name> <uuid>4e1ca7a9-0
  12. To work around this add a variable like below. Just use the latest image. edit: https://nginxproxymanager.com/advanced-config/
  13. Hello all, anyone have any experience with ZFS disk images to use for VMs? https://docs.oracle.com/cd/E69554_01/html/E69557/storingdiskimageswithzfs.html It would be great if we could snapshot the VMs; right now I'm snapshotting the qemu .img but I'm not sure that works the way I think it does.
  14. Hello! I'm trying to get the sub-containers of homeassistant_supervisor set up with a bridged VLAN; I can't seem to edit the settings within the docker tab on Unraid. The hassio_dns has network = default, homeassistant has network = host. I'd like to set that to br:3, does anyone know how? Also, do I need to change the underlying config? @MikelillOi check this thread
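One way to attach a container to a bridged VLAN from the CLI is a macvlan network on the VLAN sub-interface. This is only a hedged sketch: the parent interface br0.3, the subnet, the gateway, and the network name are assumptions, not from the post:

```shell
# Assumed VLAN 3 sub-interface and addressing; adjust to your network.
docker network create -d macvlan \
  --subnet=192.168.3.0/24 --gateway=192.168.3.1 \
  -o parent=br0.3 vlan3
docker network connect vlan3 hassio_dns   # attach the supervisor's DNS container
```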