Everything posted by BasWeg

  1. Does the user 99:100 (nobody:users) have the correct rights in the folder citadel/vsphere?
  2. An example is documented in the settings. My docker files are in /mnt/SingleSSD/docker/: the zpool is SingleSSD and the dataset is docker, so the working pattern for exclusion is: /^SingleSSD\/docker\/.*/
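For reference, such a pattern can be sanity-checked outside Unraid with grep. A minimal sketch: the surrounding /.../ delimiters and escaped slashes belong to the settings field and are dropped here, and the dataset names are made-up examples, not from a real pool.

```shell
#!/bin/bash
# Check which dataset names an exclusion pattern would match.
# Pattern from the post, without the /.../ delimiters of the settings field.
pattern='^SingleSSD/docker/.*'

# Example dataset names (assumptions for illustration).
for ds in "SingleSSD/docker/containers" "SingleSSD/media" "OtherPool/docker/x"; do
  if echo "$ds" | grep -Eq "$pattern"; then
    echo "excluded: $ds"
  else
    echo "kept:     $ds"
  fi
done
```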
  3. I've done this via dataset properties. To share a dataset:

zfs set sharenfs='rw=@<IP_RANGE>,fsid=<FileSystemID>,anongid=100,anonuid=99,all_squash' <DATASET>

<IP_RANGE> restricts rw access to a given address range; just have a look at the NFS share properties. <FileSystemID> is a unique ID you need to set; I started with 1 and increased the number with every shared dataset. <DATASET> is the dataset you want to share. The magic was the FileSystemID: without setting this ID, it was not possible to connect from any client. To unshare a dataset, you can simply set:

zfs set sharenfs=off <DATASET>
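The scheme above (one unique fsid per shared dataset, counted up from 1) can be sketched as a small loop. The pool/dataset names and the IP range are placeholders, and with DRYRUN=1 the script only prints the commands instead of running zfs:

```shell
#!/bin/bash
# Sketch of the per-dataset sharing scheme: each dataset gets a unique
# fsid, incremented per share. Names and IP range are assumptions.
DRYRUN=1
IP_RANGE="192.168.1.0/24"   # assumption: adjust to your network
fsid=1

for ds in tank/media tank/backup; do   # assumption: example datasets
  cmd="zfs set sharenfs='rw=@${IP_RANGE},fsid=${fsid},anongid=100,anonuid=99,all_squash' ${ds}"
  if [ "$DRYRUN" = "1" ]; then
    echo "$cmd"     # dry run: show the command only
  else
    eval "$cmd"     # real run: apply the property
  fi
  fsid=$((fsid + 1))
done
```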
  4. So from that, I think it is intended behaviour as it is.
  5. Sorry for the silly question... how can I change from a docker image file to a folder?
  6. OK, but can I leave the docker.img on ZFS, or should I put in one extra drive for the Unraid array?
  7. I have the following configuration:

root@UnraidServer:~# cat /sys/module/zfs/version
2.0.0-1
root@UnraidServer:~# dmesg | grep -i zfs
[ 56.483956] ZFS: Loaded module v2.0.0-1, ZFS pool version 5000, ZFS filesystem version 5
[1073920.595334] Modules linked in: iscsi_target_mod target_core_user target_core_pscsi target_core_file target_core_iblock dummy xt_mark xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle nf_tables vhost_net tun vhost vhost_iotlb tap xt_nat xt_tcpudp veth macvlan xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter xfs nfsd lockd grace sunrpc md_mod zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) it87 hwmon_vid ip6table_filter ip6_tables iptable_filter ip_tables x_tables bonding amd64_edac_mod edac_mce_amd kvm_amd kvm wmi_bmof crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel r8125(O) aesni_intel crypto_simd mpt3sas cryptd r8169 glue_helper raid_class i2c_piix4 ahci nvme realtek nvme_core ftdi_sio scsi_transport_sas rapl i2c_core wmi k10temp ccp libahci usbserial acpi_cpufreq button

So, what should I do? Change to unstable and see if it still works?
  8. Yes. You need one drive (for me, a USB stick) in the array to start the services.
  9. For me it is the same. Only zfs. I hope I can leave it as it is.
  10. For me it does not work. All of my unassigned SSDs are configured within /boot/config/Smart-one.cfg, but the notification still uses the default temperature thresholds for HDDs.
  11. No, you need to execute the command for every dataset inside the pool. I'm using the following script in User Scripts, set to run at "First Start Array only":

#!/bin/bash
#from testdasi
#echo "[pool]/[filesystem] /mnt/[pool]/[filesystem] zfs rw,default 0 0" >> /etc/mtab
#just dump everything in
for n in $(zfs list -H -o name)
do
    echo "$n /mnt/$n zfs rw,default 0 0" >> /etc/mtab
done
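To trace what that loop appends, here is a sketch with zfs stubbed out to return two example dataset names and a temp file standing in for /etc/mtab (both are assumptions for illustration; the real script calls the actual zfs binary):

```shell
#!/bin/bash
# Trace the mtab loop with a stubbed zfs command.
zfs() { printf '%s\n' "SingleSSD" "SingleSSD/docker"; }   # stub (assumption)

MTAB=$(mktemp)   # stand-in for /etc/mtab
for n in $(zfs list -H -o name); do
  echo "$n /mnt/$n zfs rw,default 0 0" >> "$MTAB"
done

cat "$MTAB"
# → SingleSSD /mnt/SingleSSD zfs rw,default 0 0
# → SingleSSD/docker /mnt/SingleSSD/docker zfs rw,default 0 0
rm -f "$MTAB"
```

One mtab line per dataset is exactly why the command has to run for every dataset, not just the pool.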
  12. Which version of ZFS do you have working with VMs and Docker now?
  13. The original SMB reference is pretty old, and one topic later testdasi had a solution.
  14. To be honest, no. I have nothing to do with Linux internals.
  15. Hi, please check this post. This is the workaround I'm using. Best regards, Bastian
  16. So, what about the other .img files used for VMs and so on? Are there any issues? I'm still on RC1 with zfs-2.0.0-1.
  17. All this Samba stuff is pretty well explained in the referenced topic.
  18. So, at the moment it is better not to update? I'm still on RC1 with ZFS 2.0.0
  19. yep, that is correct. The dashboard is fine... the notification is the trouble
  20. In my "smart-one.cfg", I can see my configuration for the unassigned disks I use with ZFS:

[Samsung_SSD_860_EVO_250GB_xxx]
hotTemp="55"
maxTemp="65"
[Samsung_SSD_860_EVO_250GB_xxx]
hotTemp="55"
maxTemp="65"
[Samsung_SSD_860_EVO_500GB_xxx]
hotTemp="55"
maxTemp="65"
[Samsung_SSD_860_EVO_250GB_xxx]
hotTemp="55"
maxTemp="65"
[Samsung_SSD_970_EVO_Plus_1TB_xxx]
hotTemp="55"
maxTemp="65"

but the Unraid system doesn't care and complains all the time using the default settings.
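For what it's worth, the per-disk overrides in such a file can be listed with a short awk sketch. The inline config and temp file are assumptions for illustration; on a real system you would point CFG at the actual file under /boot/config:

```shell
#!/bin/bash
# List "disk hotTemp maxTemp" per section of a smart-one.cfg-style file.
CFG=$(mktemp)   # inline example config (assumption)
cat > "$CFG" <<'EOF'
[Samsung_SSD_860_EVO_250GB_xxx]
hotTemp="55"
maxTemp="65"
EOF

awk -F'"' '
  /^\[/     { gsub(/[][]/, "", $0); disk=$0 }  # remember current section
  /hotTemp/ { hot=$2 }                         # value between the quotes
  /maxTemp/ { print disk, hot, $2 }            # emit one line per disk
' "$CFG"
# → Samsung_SSD_860_EVO_250GB_xxx 55 65
rm -f "$CFG"
```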
  21. So, use_unstable_build is not needed any more to get version 2.0.0?
  22. Thanks for the hint! I've made a user script with the following content, set to run after the first array start:

#!/bin/bash
#from testdasi
#echo "[pool]/[filesystem] /mnt/[pool]/[filesystem] zfs rw,default 0 0" >> /etc/mtab
#just dump everything in
for n in $(zfs list -H -o name)
do
    echo "$n /mnt/$n zfs rw,default 0 0" >> /etc/mtab
done
  23. Is it possible to get rid of / waive this error/warning? I know that I'm using an unstable version, and every time after a reboot this shows up and shocks me. 🙃
  24. Is a reboot required to get the new update, or does the other way (delete and reinstall the plugin) still work?