BasWeg

Members
  • Posts: 37
  • Joined
  • Last visited

Reputation: 9

  1. Does the user 99:100 (nobody:users) have the correct permissions on the folder citadel/vsphere? (A quick way to check this is sketched in the notes after this list.)
  2. An example is documented in the settings. I have my Docker files in /mnt/SingleSSD/docker/; the zpool is SingleSSD and the dataset is docker, so the working exclusion pattern is: /^SingleSSD\/docker\/.*/ (a quick way to test such a pattern is sketched in the notes after this list).
  3. I've done this via dataset properties. To share a dataset:

         zfs set sharenfs='rw=@<IP_RANGE>,fsid=<FileSystemID>,anongid=100,anonuid=99,all_squash' <DATASET>

     <IP_RANGE> is something like 192.168.0.0/24, to restrict rw access; just have a look at the NFS share properties. <FileSystemID> is a unique ID you need to set; I started with 1 and increased the number with every shared dataset. <DATASET> is the dataset you want to share. The magic was the FileSystemID: without setting this ID, it was not possible to connect from any client. To unshare a dataset, you can simply set:

         zfs set sharenfs=off <DATASET>

     (A short verification sketch is in the notes after this list.)
  4. From https://daveparrish.net/posts/2020-11-10-Managing-ZFS-Snapshots-ignore-Docker-snapshots.html, so I think it is intended to behave that way.
  5. Sorry for the silly question... how can I change from the Docker image file to a folder?
  6. OK, but can I leave the docker.img on ZFS, or should I put in an extra drive for the Unraid array?
  7. I have the following configuration:

         root@UnraidServer:~# cat /sys/module/zfs/version
         2.0.0-1
         root@UnraidServer:~# dmesg | grep -i zfs
         [ 56.483956] ZFS: Loaded module v2.0.0-1, ZFS pool version 5000, ZFS filesystem version 5
         [1073920.595334] Modules linked in: iscsi_target_mod target_core_user target_core_pscsi target_core_file target_core_iblock dummy xt_mark xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle nf_tables vhost_net tun vhost vhost_iotlb tap xt_nat xt_tcpudp veth macvlan xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter xfs nfsd lockd grace sunrpc md_mod zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) it87 hwmon_vid ip6table_filter ip6_tables iptable_filter ip_tables x_tables bonding amd64_edac_mod edac_mce_amd kvm_amd kvm wmi_bmof crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel r8125(O) aesni_intel crypto_simd mpt3sas cryptd r8169 glue_helper raid_class i2c_piix4 ahci nvme realtek nvme_core ftdi_sio scsi_transport_sas rapl i2c_core wmi k10temp ccp libahci usbserial acpi_cpufreq button

     So, what should I do? Change to unstable and see if it still works?
  8. Yes. You need one drive (for me, a USB stick) as the array in order to start the services.
  9. For me it is the same, only ZFS. I hope I can leave it as it is.
  10. For me it does not work. All of my unassigned SSDs are configured in /boot/config/Smart-one.cfg, but the notification still uses the default temperature threshold for HDDs.
  11. No, you need to execute the command for every dataset inside the pool. https://forums.unraid.net/topic/41333-zfs-plugin-for-unraid/?do=findComment&comment=917605 I'm using the following script in UserScripts, executed at "First Start Array only":

          #!/bin/bash
          # from testdasi (https://forums.unraid.net/topic/41333-zfs-plugin-for-unraid/?do=findComment&comment=875342)
          # echo "[pool]/[filesystem] /mnt/[pool]/[filesystem] zfs rw,default 0 0" >> /etc/mtab
          # just dump everything in
          for n in $(zfs list -H -o name)
          do
              echo "$n /mnt/$n zfs rw,default 0 0" >> /etc/mtab
          done

      (A quick check of the result is sketched in the notes after this list.)
  12. Which version of ZFS do you have working with VMs and Docker now?
  13. The original SMB reference is pretty old, and one topic later testdasi had a solution.
  14. To be honest, no. I have nothing to do with Linux internals.
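
Note on item 1: a minimal sketch of how to check this, assuming the folder in question is the dataset mounted at /mnt/citadel/vsphere (the path is inferred from the post; adjust it to your setup).

    # show numeric UID:GID for the folder itself and its contents (99:100 = nobody:users on Unraid)
    ls -ldn /mnt/citadel/vsphere
    ls -ln /mnt/citadel/vsphere

    # if the owner is wrong, hand the whole tree to nobody:users and open read/write for user and group
    chown -R 99:100 /mnt/citadel/vsphere
    chmod -R u+rwX,g+rwX /mnt/citadel/vsphere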
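
Note on item 2: a quick way to test such an exclusion pattern before relying on it is to run it against the dataset names reported by zfs list. The pool and dataset names below are the ones from the post; the plugin settings wrap the pattern in /.../ and escape the slashes, while plain grep does not need that.

    # datasets that the exclusion pattern /^SingleSSD\/docker\/.*/ would cover
    zfs list -H -o name | grep -E '^SingleSSD/docker/.*'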
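
Note on item 3: a short verification sketch using standard tools; the dataset name tank/media is a placeholder, and showmount is assumed to be available from the NFS utilities.

    # concrete example of the share command from item 3, with placeholder values
    zfs set sharenfs='rw=@192.168.0.0/24,fsid=1,anongid=100,anonuid=99,all_squash' tank/media

    # confirm the property ZFS has recorded for the dataset
    zfs get sharenfs tank/media

    # list the exports the NFS server is currently offering
    showmount -e localhost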
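
Note on item 11: a quick check that the script had the intended effect; every dataset should now have its own line in /etc/mtab, which is what the workaround relies on.

    # each ZFS dataset should appear here with a /mnt/<pool>/<dataset> mount point
    grep ' zfs ' /etc/mtab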