ezra

Members
  • Posts

    31
  • Joined

  • Last visited

ezra's Achievements

Noob (1/14)

13 Reputation

  1. @Steini84 Can't we get you to do a webinar on the ZFS setup on your Unraid server? So, to be clear: 1. update the plugin, 2. upgrade Unraid to rc1, and that will keep everything as is, in theory? Thanks for your work on this plugin, much appreciated.
  2. It should be as easy as removing the ZFS plugin, upgrading to beta35, and doing the side load, right? Does anyone know if they are also working on GUI/array support for ZFS? I would love to contribute to the development, but I don't have a clue where to start. Unraid with ZFS array support would solve all my needs! This is the first frontend I've seen next to FreeNAS for ZFS: https://github.com/optimans/cockpit-zfs-manager I use it for my Proxmox servers, though up until this point I have only managed ZFS via the CLI, which is still fine. But I love the snapshot/pool creation from the UI!
  3. Use -m during import to specify the mount point, not -d. Afterwards, use "zfs set mountpoint=/path pool". Useful commands: zfs --help and zpool --help. (There is a short mount-point sketch after the post list.)
  4. I'm trying to figure this out as well. The current Unraid plugin Docker fails to download. I'll report back if I find out anything.
  5. Oh my god, sorry for wasting your time... Totally overlooked that.
  6. Hello Steini, well, the problem is that I set up those snapshot rules 3 days ago and nothing has been added to: zfs list -r -t snapshot I'll see if "run once" triggers the automated backups to start. Thank you.
  7. Hello! I've installed the plugin for someone else; on his Unraid 6.3 we don't see any snapshots being created by znapzend. Reinstalling did not help. (A sketch of recreating such a plan with znapzendzetup follows after the post list.)

     *** backup plan: HDD ***
     enabled = on
     mbuffer = off
     mbuffer_size = 1G
     post_znap_cmd = off
     pre_znap_cmd = off
     recursive = on
     src = HDD
     src_plan = 24hours=>2hours,7days=>1day,30days=>7days,90days=>30days
     tsformat = %Y-%m-%d-%H%M%S
     zend_delay = 0

     *** backup plan: NVME ***
     dst_0 = HDD/Backup/NVME
     dst_0_plan = 1day=>6hours
     enabled = on
     mbuffer = off
     mbuffer_size = 1G
     post_znap_cmd = off
     pre_znap_cmd = off
     recursive = on
     src = NVME
     src_plan = 24hours=>2hours,7days=>1day,30days=>7days,90days=>30days
     tsformat = %Y-%m-%d-%H%M%S
     zend_delay = 0

     After creating the plans I executed: pkill -HUP znapzend Please advise.
  8. It only imports the pool. Just delete the folder and reboot to see if it's still there. It should just be a leftover or an unknown typo.
  9. First try: umount /mnt/ssd500gb If the output is something like "directory is not mounted", then run: rm -r /mnt/ssd500gb (this will delete the entire folder, so make sure there's nothing in it). Before or after that, check with: df -h whether /mnt/ssd500gb is listed anywhere, and whether /mnt/disks/ssd500gb is as well. (The same steps are collected in a sketch after the post list.)
  10. For me, destroying the pool does the job. You can try to reinstall the ZFS plugin and issue: zpool status or zpool import -a and see if there is still something left. For all the others, I have found out how to use zvols for VM storage (so you can make use of snapshots; with raw .img you can't, and I only had success with qcow2 on Ubuntu/Debian servers, desktops failed to do snapshots on qcow2): zfs create -V 50G pool/zvolname Then set the VM config for the disk to manual: /dev/zvol/pool/zvolname and set the type to virtio or sata (whatever works for you, virtio is still the best performance-wise). I've also figured out how to snapshot the right way with znapzendzetup (also provided as an Unraid plugin by Steini84), including to different datasets to ensure server uptime. If anyone needs a hand, let me know. (A zvol sketch follows after the post list.)
  11. Hello, I'm trying to install OPNsense (HardenedBSD). I've tried SeaBIOS/OVMF with i440fx and Q35, and nothing works. Either I'm able to start the VM, it gets to this screen, and then it hangs, or I can't even save the VM with i440fx, which gives this error: `XML error: The PCI controller with index='0' must be model='pci-root' for this machine type, but model='pcie-root' was found instead` (see the note on pci-root after the post list). My config, which works until it freezes:

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm'>
       <name>OPNsense</name>
       <uuid>4e1ca7a9-0912-97d2-239b-97b1c384522e</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="FreeBSD" icon="freebsd.png" os="freebsd"/>
       </metadata>
       <memory unit='KiB'>3145728</memory>
       <currentMemory unit='KiB'>3145728</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>4</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='12'/>
         <vcpupin vcpu='1' cpuset='13'/>
         <vcpupin vcpu='2' cpuset='14'/>
         <vcpupin vcpu='3' cpuset='15'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none'>
         <topology sockets='1' cores='2' threads='2'/>
         <cache mode='passthrough'/>
         <feature policy='require' name='topoext'/>
       </cpu>
       <clock offset='utc'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/HDD/Software/ISO/OPNsense-20.1-OpenSSL-dvd-amd64.iso'/>
           <target dev='hda' bus='sata'/>
           <readonly/>
           <boot order='2'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <disk type='file' device='disk'>
           <driver name='qemu' type='qcow2' cache='writeback'/>
           <source file='/mnt/SSD/VMs/OPNsense/vdisk1.img'/>
           <target dev='hdc' bus='sata'/>
           <boot order='1'/>
           <address type='drive' controller='0' bus='0' target='0' unit='2'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='sata' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x10'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x11'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0x12'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0x13'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:63:28:a5'/>
           <source bridge='br0.900'/>
           <model type='virtio'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </interface>
         <serial type='pty'>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
           <listen type='address' address='0.0.0.0'/>
         </graphics>
         <video>
           <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
         </video>
         <memballoon model='virtio'>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </memballoon>
       </devices>
     </domain>

     Could anyone please advise?
  12. To work around this, add a variable like the one below. Just use the latest image. Edit: https://nginxproxymanager.com/advanced-config/
  13. Hello all, does anyone have any experience with ZFS disk images to use for VMs? https://docs.oracle.com/cd/E69554_01/html/E69557/storingdiskimageswithzfs.html It would be great if we could snapshot the VMs; right now I'm snapshotting the qemu .img, but I'm not sure that works the way I think it does. (The zvol sketch after the post list is related.)
  14. Hello! I'm trying to get the sub-containers of homeassistant_supervisor set up with a bridged VLAN, but I can't seem to edit their settings within the Docker tab on Unraid. hassio_dns has network = default and homeassistant has network = host; I'd like to set that to br0.3. Does anyone know how? Also, do I need to change the underlying config? (See the Docker network sketch after the post list.) @MikelillOi check this thread
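
A few sketches referenced in the posts above follow; all of them are rough notes, not tested recipes.

For post 3: the post mentions -m; another common way to control where an imported pool lands is -R (altroot) on zpool import, combined with the zfs set mountpoint command from the post. Assuming a pool named "pool" and a target of /mnt/pool (both hypothetical names):

# import under an alternate root so nothing mounts over an existing path
zpool import -R /mnt pool

# make the mount point permanent on the top-level dataset
zfs set mountpoint=/mnt/pool pool

# verify
zfs get mountpoint pool
zpool status pool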
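
For post 7: a minimal sketch of recreating the HDD plan with znapzendzetup, using the schedule and tsformat from the post. The exact option names should be checked against znapzendzetup --help for the installed version, and a snapshot-only plan (no DST) is assumed to be allowed here:

# recreate the recursive snapshot plan for the HDD pool
znapzendzetup create --recursive --tsformat='%Y-%m-%d-%H%M%S' \
  SRC '24hours=>2hours,7days=>1day,30days=>7days,90days=>30days' HDD

# confirm the stored configuration, then tell the daemon to reload it
znapzendzetup list
pkill -HUP znapzend

# check whether snapshots are now being created
zfs list -r -t snapshot HDD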
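
For post 9: the same cleanup steps collected in order, assuming the stale path is /mnt/ssd500gb as in the post. Check the df output before deleting anything:

# is the path still an active mount?
df -h | grep ssd500gb

# try to unmount it; "not mounted" means it is only a leftover directory
umount /mnt/ssd500gb

# only if it is not mounted and holds nothing you need
rm -r /mnt/ssd500gb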
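
For posts 10 and 13: a sketch of zvol-backed VM storage plus snapshots. The pool/zvolname path and 50G size come from post 10; the snapshot name is made up, and snapshots are best taken with the guest shut down or otherwise quiesced:

# create a 50G zvol and point the VM's disk at /dev/zvol/pool/zvolname (bus virtio or sata)
zfs create -V 50G pool/zvolname

# point-in-time snapshot of the VM disk
zfs snapshot pool/zvolname@before-change

# roll the disk back if the change went wrong
zfs rollback pool/zvolname@before-change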
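
On the error in post 11: the config above declares <controller type='pci' index='0' model='pcie-root'/>, which belongs to the Q35 machine type. If the machine type is switched to i440fx, that controller has to be pci-root instead, and the pcie-root-port controllers have to go as well. A minimal sketch of the lines that change (the machine version string is only an example):

     <os>
       <!-- i440fx instead of pc-q35 -->
       <type arch='x86_64' machine='pc-i440fx-4.2'>hvm</type>
     </os>
     ...
     <devices>
       <!-- conventional PCI root for i440fx, not PCIe -->
       <controller type='pci' index='0' model='pci-root'/>
     </devices>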
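
For post 14: one way this is often handled from the command line, assuming Unraid has already created a VLAN network named br0.3 and that hassio_dns is the container to move (both names are taken from the post and may differ on the actual system). Note that the supervisor can recreate its containers, in which case the attachment has to be reapplied:

# list the Docker networks Unraid knows about (look for br0.3)
docker network ls

# attach the running container to the VLAN bridge network
docker network connect br0.3 hassio_dns

# verify which networks the container is on now
docker inspect -f '{{json .NetworkSettings.Networks}}' hassio_dns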