BasWeg's Achievements
  1. Hi, I've just updated the plugin and corrected the exclusion pattern. Nevertheless, the Main Page still shows the following warning:
     Warning: session_start(): Cannot start session when headers already sent in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(522) : eval()'d code on line 3
     Any idea? (I've cleared the browser cache, of course.) Unraid version 6.9.2
  2. Ah, OK. It was possible in the past, but since ZFS is now a kernel module it is not possible anymore.
  3. Is it possible to update without a reboot?
  4. Does the user 99:100 (nobody:users) have the correct rights on the folder citadel/vsphere?
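     If in doubt, the ownership can be checked and fixed from a shell. A minimal sketch, using a temporary directory as a stand-in for the actual citadel/vsphere folder (99:100 is nobody:users on Unraid):

     ```shell
     # Stand-in for the real share folder (hypothetical path)
     SHARE=$(mktemp -d)
     # chown needs root; on Unraid the console runs as root by default
     chown -R 99:100 "$SHARE" 2>/dev/null
     # Print numeric owner:group -- 99:100 means nobody:users
     stat -c '%u:%g' "$SHARE"
     ```

     Replace the temp directory with the real path of the share before running it for real.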
  5. An example is documented in the settings. My Docker files are in /mnt/SingleSSD/docker/: the zpool is SingleSSD and the dataset is docker, so the working exclusion pattern is /^SingleSSD\/docker\/.*/
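     The pattern can be sanity-checked outside the plugin with grep (the surrounding /.../ delimiters from the settings field are dropped; the dataset names below are just examples matching my layout):

     ```shell
     pattern='^SingleSSD/docker/.*'
     # A dataset under SingleSSD/docker should match (i.e. be excluded)
     echo 'SingleSSD/docker/btrfs' | grep -qE "$pattern" && echo 'excluded'
     # Any other dataset should not match
     echo 'SingleSSD/media' | grep -qE "$pattern" || echo 'kept'
     ```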
  6. I've done this via dataset properties. To share a dataset:
     zfs set sharenfs='rw=@<IP_RANGE>,fsid=<FileSystemID>,anongid=100,anonuid=99,all_squash' <DATASET>
     <IP_RANGE> is the network range that is allowed rw access; just have a look at the NFS share properties. <FileSystemID> is a unique ID you need to set: I started with 1 and increased the number with every shared dataset. <DATASET> is the dataset you want to share. The magic was the FileSystemID; without setting this ID, it was not possible to connect from any client. To unshare a dataset, you can simply set:
     zfs set sharenfs=off <DATASET>
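     To avoid assigning the fsid by hand, the steps above can be scripted. A sketch that only prints the commands for review (the pool name "tank", the example datasets, and the range 192.168.0.0/24 are my placeholders, not values from the plugin):

     ```shell
     fsid=1
     # Stand-in list; on a real system use: $(zfs list -H -o name -r <POOL>)
     for ds in tank tank/vsphere tank/media; do
       echo "zfs set sharenfs='rw=@192.168.0.0/24,fsid=${fsid},anongid=100,anonuid=99,all_squash' ${ds}"
       fsid=$((fsid+1))
     done
     ```

     Check the printed commands, then run them (or pipe the output to sh) once they look right.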
  7. So, I think it is intended to be this way.
  8. Sorry for the silly question... how can I change from a Docker image file to a folder?
  9. OK, but can I leave the docker.img on ZFS, or should I put in one extra drive for the Unraid array?
  10. I have the following configuration:
      root@UnraidServer:~# cat /sys/module/zfs/version
      2.0.0-1
      root@UnraidServer:~# dmesg | grep -i zfs
      [ 56.483956] ZFS: Loaded module v2.0.0-1, ZFS pool version 5000, ZFS filesystem version 5
      [1073920.595334] Modules linked in: iscsi_target_mod target_core_user target_core_pscsi target_core_file target_core_iblock dummy xt_mark xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle nf_tables vhost_net tun vhost vhost_iotlb tap xt_nat xt_tcpudp veth macvlan xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter xfs nfsd lockd grace sunrpc md_mod zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) it87 hwmon_vid ip6table_filter ip6_tables iptable_filter ip_tables x_tables bonding amd64_edac_mod edac_mce_amd kvm_amd kvm wmi_bmof crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel r8125(O) aesni_intel crypto_simd mpt3sas cryptd r8169 glue_helper raid_class i2c_piix4 ahci nvme realtek nvme_core ftdi_sio scsi_transport_sas rapl i2c_core wmi k10temp ccp libahci usbserial acpi_cpufreq button
      So, what should I do? Switch to unstable and see if it still works?
  11. Yes, you need one drive (for me, a USB stick) in the array to start the services.
  12. For me it is the same: only ZFS. I hope I can leave it as it is.
  13. For me it does not work. All of my unassigned SSDs are configured within /boot/config/Smart-one.cfg, but the notification is related to the default temperature threshold for HDDs.
  14. No, you need to execute the command for every dataset inside the pool. I'm using the following script in User Scripts, set to run at "First Start Array only":
      #!/bin/bash
      # from testdasi (
      #echo "[pool]/[filesystem] /mnt/[pool]/[filesystem] zfs rw,default 0 0" >> /etc/mtab
      # just dump everything in
      for n in $(zfs list -H -o name)
      do
          echo "$n /mnt/$n zfs rw,default 0 0" >> /etc/mtab
      done
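      The loop can be dry-run against a temp file before letting it touch /etc/mtab. A sketch with a fixed dataset list standing in for the real `zfs list -H -o name` output (the names are just examples from my pool):

      ```shell
      MTAB=$(mktemp)
      # Stand-in dataset names; the real script enumerates them with zfs list
      for n in SingleSSD SingleSSD/docker; do
        echo "$n /mnt/$n zfs rw,default 0 0" >> "$MTAB"
      done
      cat "$MTAB"
      ```

      Each dataset should produce one mtab-style line mapping it to its /mnt mountpoint.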