BasWeg
Everything posted by BasWeg
-
For me it only works since 6.12.6 (I had 6.11.5 before). My only change is amd_pstate=passive in syslinux.cfg; no modprobe was needed in the go file:

label Unraid OS
  menu default
  kernel /bzimage
  append rcu_nocbs=0-15 isolcpus=6-7,14-15 pcie_acs_override=downstream,multifunction initrd=/bzroot amd_pstate=passive vfio_iommu_type1.allow_unsafe_interrupts=1 video=efifb:off idle=halt
-
I've created a new VM, using the old vdisk, and added the same XML for graphics and sound:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
  </source>
  <alias name='hostdev0'/>
  <rom file='/mnt/SSD/Vms/Bios/Asus.GT710.2048.170525.rom'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
  </source>
  <alias name='hostdev1'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
</hostdev>

And it's working. 🤷♂️ The old VM with the same config is not.
-
Hi, I've upgraded today from 6.11.5 to 6.12.6 and now have the following issue: I cannot start any VM from the GUI. Only for some Linux VMs can I use "Start with Console (VNC)". The normal Start, Restart, Pause, Hibernate, Stop, and Force Stop no longer work: there is no reaction, and the dialog stays open. For my Win10 VM, I can enable Autostart and disable/enable the VM itself; then the Win10 VM is running. It is not possible to remove any VM either: the GUI asks if I want to proceed, but clicking Proceed is ignored. (I've flushed the cache already.) An additional problem: the primary GPU passthrough is not working any more in 6.12.6. I hope you have any ideas. Best regards, Bastian

unraidserver-diagnostics-20231221-1013.zip
-
Since I also use the znapzend plugin, does your solution just work with the old stored znapzend configuration?
-
I'm going to try @Marshalleq's solution: https://forums.unraid.net/topic/41333-zfs-plugin-for-unraid/?do=findComment&comment=1250880
-
Ok, then I need to rename all my dockers, too.

NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
HDD        5.45T  1.79T  3.67T        -         -    2%  32%  1.00x  ONLINE  -
NVME        928G   274G   654G        -         -    8%  29%  1.00x  ONLINE  -
SSD         696G   254G   442G        -         -   13%  36%  1.00x  ONLINE  -
SingleSSD   464G  16.6G   447G        -         -    1%   3%  1.00x  ONLINE  -

HDD and SSD are raidz1-0 with 3 discs each. So, to update, I should record the zpool status (UIDs for each array) and create the arrays afterwards with these UIDs?
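One way to record those IDs before the upgrade is to dump each pool's GUID to a file; a sketch, assuming the pool names from the listing above and a hypothetical output path on the flash drive, printed as a dry run rather than executed:

```shell
# Dry run: print the commands that would record each pool's GUID.
# "zpool get guid" reads the pool-level GUID property; the target file
# /boot/config/zpool-guids.txt is a hypothetical choice.
for pool in HDD NVME SSD SingleSSD; do
  echo "zpool get -H -o value guid $pool >> /boot/config/zpool-guids.txt"
done
```

Removing the `echo` would run the commands for real on a system where the pools are imported.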
-
Me, too. But I'm not sure whether this still works with the built-in version of ZFS, so I'm still on the latest release with the ZFS plug-in. My understanding is that my solution of having the mountpoints under /mnt/ is not OK anymore?
-
Hi, sorry, but I do not get this working for my Ryzen 3700X. I've added

modprobe.blacklist=acpi_cpufreq amd_pstate.shared_mem=1

and of course the modprobe amd_pstate. amd_pstate is loaded, but cpufreq-info shows:

analyzing CPU 0:
  driver: amd-pstate
  CPUs which run at the same hardware frequency: 0
  CPUs which need to have their frequency coordinated by software: 0
  maximum transition latency: 131 us.
  available cpufreq governors: corefreq-policy, conservative, ondemand, userspace, powersave, performance, schedutil
  current CPU frequency is 3.59 GHz.
analyzing CPU 1:
  driver: amd-pstate

And if I look into corefreq-cli, the system is always running at full speed; before, the system had a minimum of 2200 MHz. Tips and Tweaks tells me that there is no governor. Any idea? Best regards, Bastian
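The driver and governor state can also be checked directly from sysfs, without cpufreq-info; a read-only sketch (cpu0 chosen as an example, and the attributes may simply be absent on some systems):

```shell
# Read the cpufreq attributes for cpu0 from sysfs; print a placeholder
# when an attribute is not exposed on this machine.
for f in scaling_driver scaling_governor scaling_min_freq scaling_max_freq; do
  p="/sys/devices/system/cpu/cpu0/cpufreq/$f"
  if [ -r "$p" ]; then
    echo "$f: $(cat "$p")"
  else
    echo "$f: (not available)"
  fi
done
```

If scaling_governor reads back empty or the files are missing, that matches the "no governor" symptom Tips and Tweaks reports.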
-
Hi, I've just updated the plugin and corrected the exclusion pattern. Nevertheless, the Main page shows the following warning:

"Warning: session_start(): Cannot start session when headers already sent in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(522) : eval()'d code on line 3"

Any idea? (I've cleared the browser cache, of course.) Unraid version 6.9.2
-
Ah, ok. It was possible in the past, but since ZFS is a kernel module it is not possible anymore.
-
Is it possible to update without a reboot?
-
Does the user 99:100 (nobody:users) have the correct rights on the folder citadel/vsphere?
-
An example is documented in the settings. I have my Docker files in /mnt/SingleSSD/docker/; the zpool is SingleSSD and the dataset is docker, so the working exclusion pattern is: /^SingleSSD\/docker\/.*/
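The pattern can be sanity-checked against sample dataset names before trusting it with snapshots; a sketch, where "SingleSSD/appdata/xyz" is a made-up name that must NOT match:

```shell
# Test the exclusion regex against sample dataset names.
pattern='^SingleSSD/docker/.*'
for name in 'SingleSSD/docker/container1' 'SingleSSD/appdata/xyz'; do
  if printf '%s\n' "$name" | grep -Eq "$pattern"; then
    echo "excluded: $name"
  else
    echo "kept: $name"
  fi
done
```

Only names under the SingleSSD/docker dataset should show up as excluded.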
-
I've done this via dataset properties. To share a dataset:

zfs set sharenfs='rw=@<IP_RANGE>,fsid=<FileSystemID>,anongid=100,anonuid=99,all_squash' <DATASET>

<IP_RANGE> is something like 192.168.0.0/24, to restrict rw access; just have a look at the NFS share properties. <FileSystemID> is a unique ID you need to set; I started with 1 and increased the number with every shared dataset. <DATASET> is the dataset you want to share. The magic was the FileSystemID: without setting this ID, it was not possible to connect from any client. To unshare a dataset, you can simply set:

zfs set sharenfs=off <DATASET>
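Since the fsid has to differ per dataset, the per-dataset commands can be generated in a loop; a sketch with hypothetical dataset names and IP range, printing the commands instead of running them:

```shell
# Dry run: emit one "zfs set sharenfs=..." command per dataset,
# incrementing fsid each time. Dataset names and IP range are examples.
IP_RANGE="192.168.0.0/24"
fsid=1
for ds in SSD/media SSD/backups; do
  echo "zfs set sharenfs='rw=@${IP_RANGE},fsid=${fsid},anongid=100,anonuid=99,all_squash' ${ds}"
  fsid=$((fsid + 1))
done
```

Piping the output to `sh` (after checking it) would apply the shares on a live system.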
-
From https://daveparrish.net/posts/2020-11-10-Managing-ZFS-Snapshots-ignore-Docker-snapshots.html — so I think it is intended to work this way.
-
Sorry for the silly question... how can I change from a Docker image file to a folder?
-
Ok, but can I leave the docker.img on ZFS, or should I put in one extra drive for the Unraid array?
-
I have the following configuration:

root@UnraidServer:~# cat /sys/module/zfs/version
2.0.0-1
root@UnraidServer:~# dmesg | grep -i zfs
[ 56.483956] ZFS: Loaded module v2.0.0-1, ZFS pool version 5000, ZFS filesystem version 5
[1073920.595334] Modules linked in: iscsi_target_mod target_core_user target_core_pscsi target_core_file target_core_iblock dummy xt_mark xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle nf_tables vhost_net tun vhost vhost_iotlb tap xt_nat xt_tcpudp veth macvlan xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter xfs nfsd lockd grace sunrpc md_mod zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) it87 hwmon_vid ip6table_filter ip6_tables iptable_filter ip_tables x_tables bonding amd64_edac_mod edac_mce_amd kvm_amd kvm wmi_bmof crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel r8125(O) aesni_intel crypto_simd mpt3sas cryptd r8169 glue_helper raid_class i2c_piix4 ahci nvme realtek nvme_core ftdi_sio scsi_transport_sas rapl i2c_core wmi k10temp ccp libahci usbserial acpi_cpufreq button

So, what should I do? Change to unstable and see if it still works?
-
Yes. You need one drive (for me, a USB stick) as the array to start services.
-
For me it is the same: only ZFS. I hope I can leave it as it is.
-
[6.9.1] Setting warning/critical temperature in SMART settings not possible
BasWeg commented on hawihoney's report in Stable Releases
For me it does not work. All of my unassigned SSDs are configured within /boot/config/Smart-one.cfg, but the notification still uses the default temperature thresholds for HDDs.
No, you need to execute the command for every dataset inside the pool. https://forums.unraid.net/topic/41333-zfs-plugin-for-unraid/?do=findComment&comment=917605

I'm using the following script in User Scripts, executed at "First Start Array only":

#!/bin/bash
# from testdasi (https://forums.unraid.net/topic/41333-zfs-plugin-for-unraid/?do=findComment&comment=875342)
# echo "[pool]/[filesystem] /mnt/[pool]/[filesystem] zfs rw,default 0 0" >> /etc/mtab
# just dump everything in
for n in $(zfs list -H -o name)
do
  echo "$n /mnt/$n zfs rw,default 0 0" >> /etc/mtab
done
-
Which version of ZFS do you have working with VMs and Dockers now?