Posts posted by segator
-
oh.. sorry, what a newbie I am...
yes, it says SMART OK, so I suppose if any issues are detected in the future I will be notified just like for any other disk in the array, right? BTW, has anyone tried ZFS over USB disks? Is that a crazy idea? I'm out of space on my rack server and I'm considering buying a 4-slot USB enclosure. Any other idea if not?
-
question: how do you guys monitor disk SMART when using ZFS?
The Unassigned Devices plugin isn't monitoring disks that aren't mounted, right?
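For now the only workaround I can think of is scripting it myself, something roughly like this on a cron schedule (the device names and the notify call are assumptions, adjust for your own disks):
#!/bin/bash
# rough sketch: ask each ZFS member disk for its SMART health and raise an Unraid notification on failure
for dev in /dev/sdb /dev/sdc /dev/sdd; do   # example devices, not my real layout
  if ! smartctl -H "$dev" | grep -Eq "PASSED|OK"; then
    /usr/local/emhttp/webGui/scripts/notify -i alert -s "SMART problem on $dev"
  fi
done
-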
Memory issue fixed by increasing the block size (recordsize) of the dataset to 1M, ashift=12, and disabling dedup and compression on the volumes that don't need them (like multimedia vols):
-15GB of usage.
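Roughly the commands I mean, in case it helps anyone (pool/dataset names are just examples; note these properties only apply to data written after the change, and ashift is fixed when the vdev is created):
zfs set recordsize=1M tank/media     # bigger records = far fewer blocks for ZFS to track
zfs set dedup=off tank/media         # the dedup table is what eats the RAM
zfs set compression=off tank/media   # media files are already compressed anyway
# ashift=12 has to be chosen at pool creation time, e.g. zpool create -o ashift=12 ...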
@Marshalleq which error do you have? I'm running the latest beta with a gaming VM and ZFS RC2 (NAS + desktop PC all in one) and everything works fine after my memory fixes.
In your logs I saw you have disabled xattr?
-
oh, that's a good point. I also have a dataset with some media files, so I suppose I'll need to create a new one with dedup disabled and move the data there, right?
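A minimal sketch of what I have in mind (names and mountpoints are examples; the copy is needed because dedup/recordsize only apply to newly written data):
zfs create -o dedup=off -o compression=off -o recordsize=1M tank/media-new
rsync -a /mnt/tank/media/ /mnt/tank/media-new/   # rewrite the data under the new properties
# verify, then: zfs destroy tank/media && zfs rename tank/media-new tank/media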
-
yes, I remember reading about 1GB of RAM per TB of data; in my case I have 10.4TB of data, so that's 10.4GB of RAM.
So if I sum my VM RAM + ARC + dedup: 24 + 12 + 11 GB, that means totally out of RAM.
Nevertheless I have now reduced it to 20 + 8 + 11 GB and I'm still at 0GB of free RAM.
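To measure the dedup table instead of relying on the 1GB/TB rule of thumb, I believe zpool can report it directly (the pool name is an example, and the ~320 bytes per entry is the usually quoted in-core cost):
zpool status -D tank | tail -n 25   # the DDT summary shows total entries plus on-disk / in-core bytes per entry
# in-core DDT RAM ≈ entries x ~320 bytes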
-
In my case it has a lot of benefits because of the type of data I store on my volume.
I build code and archive all the builds, so the deduplication ratio is very high, as the difference between builds is only a couple of KB.
Anyway, as far as I understood, theoretically when Linux needs memory ZFS frees some of the ARC cache, but it seems it's not doing this. Could that be?
Also, do you know how these parameters affect things when using ZFS on Unraid?
Recommended values in the Tips & Tweaks Unraid plugin are:
vm.dirty_background_ratio = 2%
vm.dirty_ratio = 3%
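For reference, this is just standard sysctl, so checking and applying those values would look like this (nothing here is ZFS-specific, which is exactly my doubt):
sysctl vm.dirty_background_ratio vm.dirty_ratio   # show current values
sysctl vm.dirty_background_ratio=2                # start background writeback at 2% dirty pages
sysctl vm.dirty_ratio=3                           # block writers once 3% of RAM is dirty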
-
how is this possible?
I have 24GB for the VM,
10GB for the ARC cache,
and I'm running Nextcloud with Redis and MySQL and some rubbish dockers.
How can I see the real RAM usage of ZFS (ARC + metadata cache + dedup...)?
I have 10.4TB on my ZFS vol btw
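The closest I've found so far is the standard OpenZFS counters, assuming arc_summary and the kstat files are present on the plugin build (pool name below is an example):
arc_summary | head -n 40                                        # ARC size, target, data vs metadata breakdown
grep -E "^(size|arc_meta_used) " /proc/spl/kstat/zfs/arcstats   # raw counters in bytes
zpool status -D tank | grep dedup                               # DDT entries and in-core size for the dedup part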
-
              total        used        free      shared  buff/cache   available
Mem:             47          42           2           1           2           2
Swap:             0           0           0
-
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  size     c  avail
11:24:21     0     0      0     0    0     0    0     0    0   12G   12G   3.3G
-
Hey, I've been using ZFS on Unraid for a couple of weeks with a gaming VM as my primary desktop PC (NAS + desktop PC all in one). It works fine, but sometimes Unraid decides to kill my VM because of "out of memory". I assigned 16GB of RAM to the VM and the host has 64GB; I think ZFS is not freeing the ARC memory fast enough when other containers reclaim memory, and then the kernel decides to kill my VM.
I can fix it using hugepages, but I don't like that because then it's memory that ZFS cannot use while the VM is shut down (which in the end is 90% of the time). I tried to limit the ZFS ARC with echo 12884901888 >> /sys/module/zfs/parameters/zfs_arc_max but same result.
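For what it's worth, the way I make that limit survive reboots is just re-applying it at startup; I'm assuming here that the zfs module is already loaded when /boot/config/go runs (if not, a User Scripts job set to run at array start does the same thing):
# appended to /boot/config/go
echo 12884901888 > /sys/module/zfs/parameters/zfs_arc_max   # cap the ARC at 12GiB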
-
nope, I still have the same issue,
but the machine is more or less usable as long as there is no extreme load on the virtual ethernet device.
-
Oops, I forgot to mention:
this error md: do_drive_cmd: disk7: ATA_OP e0 ioctl error: -5 appears whenever Unraid tries to spin down the SAS disks. That's why I'm saying it's not working, as it stays in a loop forever until the log is full after a couple of weeks.
-
it is still spamming my log:
Oct 3 12:11:48 segator-unraid emhttpd: cmd: /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin install https://raw.githubusercontent.com/doron1/unraid-sas-spindown/master/sas-spindown.plg
Oct 3 12:11:48 segator-unraid root: plugin: running: anonymous
Oct 3 12:11:48 segator-unraid root: plugin: creating: /usr/local/emhttp/plugins/sas-spindown/README.md - from INLINE content
Oct 3 12:11:48 segator-unraid root: plugin: setting: /usr/local/emhttp/plugins/sas-spindown/README.md - mode to 644
Oct 3 12:11:48 segator-unraid root: plugin: creating: /usr/local/bin/unraid-spinsasdown - from INLINE content
Oct 3 12:11:48 segator-unraid root: plugin: setting: /usr/local/bin/unraid-spinsasdown - mode to 755
Oct 3 12:11:48 segator-unraid root: plugin: creating: /usr/sbin/smartctl.wrapper - from INLINE content
Oct 3 12:11:48 segator-unraid root: plugin: setting: /usr/sbin/smartctl.wrapper - mode to 755
Oct 3 12:11:48 segator-unraid root: plugin: creating: /etc/rsyslog.d/99-spinsasdown.conf - from INLINE content
Oct 3 12:11:48 segator-unraid root: plugin: setting: /etc/rsyslog.d/99-spinsasdown.conf - mode to 644
Oct 3 12:11:48 segator-unraid root: plugin: running: anonymous
Oct 3 12:11:48 segator-unraid sas-spindown plugin: Installing a wrapper for smartctl...
Oct 3 12:11:48 segator-unraid sas-spindown plugin: Installing syslog hook for SAS device spindown...
Oct 3 12:11:49 segator-unraid kernel: mdcmd (1518646): spindown 4
Oct 3 12:11:49 segator-unraid kernel: md: do_drive_cmd: disk4: ATA_OP e0 ioctl error: -5
Oct 3 12:11:49 segator-unraid kernel: mdcmd (1518647): spindown 7
Oct 3 12:11:49 segator-unraid emhttpd: error: mdcmd, 2721: Input/output error (5): write
Oct 3 12:11:49 segator-unraid emhttpd: error: mdcmd, 2721: Input/output error (5): write
Oct 3 12:11:49 segator-unraid kernel: md: do_drive_cmd: disk7: ATA_OP e0 ioctl error: -5
Oct 3 12:11:50 segator-unraid rsyslogd: [origin software="rsyslogd" swVersion="8.1908.0" x-pid="27295" x-info="https://www.rsyslog.com"] start
Oct 3 12:11:51 segator-unraid kernel: mdcmd (1518648): spindown 4
Oct 3 12:11:52 segator-unraid SAS Assist v0.5[27387]: spinning down slot 4, device /dev/sdd (/dev/sg3)
Oct 3 12:11:54 segator-unraid SAS Assist v0.5[27435]: spinning down slot 7, device /dev/sdg (/dev/sg6)
Oct 3 12:11:56 segator-unraid emhttpd: error: mdcmd, 2721: Input/output error (5): write
Oct 3 12:11:56 segator-unraid kernel: md: do_drive_cmd: disk4: ATA_OP e0 ioctl error: -5
Oct 3 12:11:56 segator-unraid kernel: mdcmd (1518649): spindown 7
Oct 3 12:11:58 segator-unraid SAS Assist v0.5[27517]: spinning down slot 4, device /dev/sdd (/dev/sg3)
Oct 3 12:12:02 segator-unraid emhttpd: error: mdcmd, 2721: Input/output error (5): write
Oct 3 12:12:02 segator-unraid kernel: md: do_drive_cmd: disk7: ATA_OP e0 ioctl error: -5
Oct 3 12:12:03 segator-unraid kernel: mdcmd (1518650): spindown 4
Oct 3 12:12:04 segator-unraid SAS Assist v0.5[27826]: spinning down slot 7, device /dev/sdg (/dev/sg6)
Oct 3 12:12:06 segator-unraid emhttpd: error: mdcmd, 2721: Input/output error (5): write
Oct 3 12:12:06 segator-unraid kernel: md: do_drive_cmd: disk4: ATA_OP e0 ioctl error: -5
Oct 3 12:12:06 segator-unraid kernel: mdcmd (1518651): spindown 7
Oct 3 12:12:08 segator-unraid SAS Assist v0.5[27904]: spinning down slot 4, device /dev/sdd (/dev/sg3)
-
just tried it and I see 2 things
running on a Ryzen 3900X, Unraid 6.9 beta29
using host-model it shows this error:
error: internal error: qemu unexpectedly closed the monitor: 2020-10-02T13:27:56.354809Z qemu-system-x86_64: warning: This feature depends on other features that were not requested: CPUID.8000000AH:EDX.nrip-save [bit 3] 2020-10-02T13:27:56.355212Z qemu-system-x86_64: warning: This feature depends on other features that were not requested: CPUID.8000000AH:EDX.npt [bit 0] 2020-10-02T13:27:56.355217Z qemu-system-x86_64: warning: This feature depends on other features that were not requested: CPUID.8000000AH:EDX.nrip-save [bit 3]
and using host-passthrough I see this other error:
error: Failed to start domain IsaacGaming error: internal error: process exited while connecting to monitor: 2020-10-02T13:21:20.955765Z qemu-system-x86_64: -device vhost-user-fs-pci,chardev=chr-vu-fs0,queue-size=1024,tag=user,bus=pci.1,addr=0x19: Failed to write msg. Wrote -1 instead of 12. 2020-10-02T13:21:20.955793Z qemu-system-x86_64: -device vhost-user-fs-pci,chardev=chr-vu-fs0,queue-size=1024,tag=user,bus=pci.1,addr=0x19: vhost_dev_init failed: Operation not permitted
I suppose I need to modify some permissions, but I'm not sure which ones.
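Thinking out loud: maybe it's not file permissions at all. As far as I know vhost-user-fs (virtiofs) needs the guest RAM to be backed by shared memory, so something like this in the domain XML might be what's missing (just a guess, not verified on my side):
<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>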
-
I don't know if maybe this could be your error:
RTX 2xxx cards also have a USB device; maybe it's the same with the RTX 3xxx. If that's the case make sure to add it, and also remember to use "multifunction". There is a video from SpaceInvader One that explains how to pass through an RTX 2xxx correctly.
-
I partially fixed it by adding this to the virsh file: <vcpusched vcpus='0-11' scheduler='fifo' priority='99'/>
Now I can run high load on the host without the VM being affected (only about a 200-point drop in Cinebench R20 in the VM while running a full CPU stress test on the host).
But now I notice I only get the stutters under high network usage: if for example I run iperf3 between host and guest I get 1.27Gbps when I should get 10Gbps, and the VM is completely unusable while the test is running.
I tried to isolate the networks, the first one for the internet connection and the second one for a guest-host only network (NFS, SMB...).
Any idea how to fix the stutters?
<interface type='bridge'>
  <mac address='52:54:00:e4:b2:06'/>
  <source bridge='br0'/>
  <model type='virtio-net'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x15' function='0x0'/>
</interface>
<interface type='bridge'>
  <mac address='52:54:00:d0:c9:ca'/>
  <source bridge='virbr0'/>
  <model type='virtio-net'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x14' function='0x0'/>
</interface>
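If I understood other threads correctly, the new virtio-net model doesn't go through the vhost backend, which could explain the ~1.3Gbps; so the first thing I want to try (untested) is switching the model back to virtio on the guest-host interface, e.g.:
<interface type='bridge'>
  <mac address='52:54:00:d0:c9:ca'/>
  <source bridge='virbr0'/>
  <model type='virtio'/>   <!-- instead of virtio-net, so the vhost backend gets used -->
  <address type='pci' domain='0x0000' bus='0x01' slot='0x14' function='0x0'/>
</interface>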
-
Same here, I'm running the latest beta29; if I need to test something let me know. I also have NVMe on the host.
-
lots of stuttering whenever there is even a soft load on the host. I tried changing the vcore thread priority to -20/-10, pinning, no pinning... the only thing that works is CPU isolation, but that's not great, because when I shut down the VM I want those cores available for other applications (like ffmpeg video encoding at night).
I'm trying to play with chrt to change the scheduler priority, without success:
root@Tower:~# pidof qemu-system-x86_64
15635
root@Tower:~# chrt -f -p 1 15635
chrt: failed to set pid 15635's policy: Operation not permitted
Any idea or suggestion?
ryzen 3900x
MSI tomahawk x570 wifi
64gb ram ddr4 3200 cl16
I've read a lot and tried different things and Unraid versions, and it's always the same.
Currently running 6.9.0 beta29 with CPU pinning,
passing through the onboard audio device, a USB controller and an Nvidia RTX 2070.
qemu command:
LC_ALL=C \
PATH=/bin:/sbin:/usr/bin:/usr/sbin \
HOME=/var/lib/libvirt/qemu/domain-1-IsaacGaming \
XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-1-IsaacGaming/.local/share \
XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-1-IsaacGaming/.cache \
XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-1-IsaacGaming/.config \
QEMU_AUDIO_DRV=none \
/usr/local/sbin/qemu \
-name guest=IsaacGaming,debug-threads=on \
-S \
-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-IsaacGaming/master-key.aes \
-blockdev '{"driver":"file","filename":"/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
-blockdev '{"driver":"file","filename":"/etc/libvirt/qemu/nvram/9239375a-31b7-de02-9205-3209712e6715_VARS-pure-efi.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
-machine pc-i440fx-5.1,accel=kvm,usb=off,vmport=off,dump-guest-core=off,mem-merge=off,kernel_irqchip=on,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
-cpu host,migratable=on,hypervisor=on,topoext=on,svm=on,invtsc=on,kvmclock=off,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vpindex,hv-vendor-id=1234567890ab,hv-frequencies,kvm=off,host-cache-info=on,l3-cache=off \
-m 16384 \
-overcommit mem-lock=off \
-smp 1,maxcpus=16,sockets=1,dies=1,cores=8,threads=2 \
-object iothread,id=iothread1 \
-object iothread,id=iothread2 \
-uuid ed8709ad-6ea8-ca1c-a980-2cf05d72f688 \
-display none \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=30,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=localtime,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-hpet \
-no-shutdown \
-boot strict=on \
-device pci-bridge,chassis_nr=1,id=pci.1,bus=pci.0,addr=0x4 \
-device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.0,addr=0x5 \
-device pci-bridge,chassis_nr=3,id=pci.3,bus=pci.0,addr=0x8 \
-device pci-bridge,chassis_nr=4,id=pci.4,bus=pci.0,addr=0x9 \
-device pci-bridge,chassis_nr=5,id=pci.5,bus=pci.0,addr=0xa \
-device pci-bridge,chassis_nr=6,id=pci.6,bus=pci.0,addr=0xb \
-device pci-bridge,chassis_nr=7,id=pci.7,bus=pci.0,addr=0xc \
-device pci-bridge,chassis_nr=8,id=pci.8,bus=pci.0,addr=0xd \
-device pci-bridge,chassis_nr=9,id=pci.9,bus=pci.0,addr=0xe \
-device pci-bridge,chassis_nr=10,id=pci.10,bus=pci.0,addr=0xf \
-device pci-bridge,chassis_nr=11,id=pci.11,bus=pci.0,addr=0x10 \
-device pci-bridge,chassis_nr=12,id=pci.12,bus=pci.0,addr=0x11 \
-device pci-bridge,chassis_nr=13,id=pci.13,bus=pci.0,addr=0x12 \
-device pci-bridge,chassis_nr=14,id=pci.14,bus=pci.0,addr=0x13 \
-device pci-bridge,chassis_nr=15,id=pci.15,bus=pci.0,addr=0x14 \
-device pci-bridge,chassis_nr=16,id=pci.16,bus=pci.0,addr=0x15 \
-device pci-bridge,chassis_nr=17,id=pci.17,bus=pci.0,addr=0x16 \
-device pci-bridge,chassis_nr=18,id=pci.18,bus=pci.0,addr=0x17 \
-device pci-bridge,chassis_nr=19,id=pci.19,bus=pci.0,addr=0x18 \
-device pci-bridge,chassis_nr=20,id=pci.20,bus=pci.0,addr=0x19 \
-device pci-bridge,chassis_nr=21,id=pci.21,bus=pci.0,addr=0x1a \
-device pci-bridge,chassis_nr=22,id=pci.22,bus=pci.0,addr=0x1b \
-device pci-bridge,chassis_nr=23,id=pci.23,bus=pci.0,addr=0x1c \
-device pci-bridge,chassis_nr=24,id=pci.24,bus=pci.0,addr=0x1d \
-device pci-bridge,chassis_nr=25,id=pci.25,bus=pci.0,addr=0x1e \
-device pci-bridge,chassis_nr=26,id=pci.26,bus=pci.0,addr=0x1f \
-device pci-bridge,chassis_nr=27,id=pci.27,bus=pci.1,addr=0x2 \
-device pci-bridge,chassis_nr=28,id=pci.28,bus=pci.1,addr=0x3 \
-device pci-bridge,chassis_nr=29,id=pci.29,bus=pci.1,addr=0x4 \
-device pci-bridge,chassis_nr=30,id=pci.30,bus=pci.1,addr=0x5 \
-device pci-bridge,chassis_nr=31,id=pci.31,bus=pci.1,addr=0x6 \
-device pci-bridge,chassis_nr=32,id=pci.32,bus=pci.1,addr=0x7 \
-device pci-bridge,chassis_nr=33,id=pci.33,bus=pci.1,addr=0x8 \
-device pci-bridge,chassis_nr=34,id=pci.34,bus=pci.1,addr=0x9 \
-device pci-bridge,chassis_nr=35,id=pci.35,bus=pci.1,addr=0xa \
-device qemu-xhci,p2=15,p3=15,id=usb,bus=pci.0,addr=0x7 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 \
-netdev tap,fd=32,id=hostnet0 \
-device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:e4:b2:06,bus=pci.1,addr=0x15 \
-netdev tap,fd=33,id=hostnet1 \
-device virtio-net,netdev=hostnet1,id=net1,mac=52:54:00:d0:c9:ca,bus=pci.1,addr=0x14 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=34,server,nowait \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-device 'vfio-pci,host=0000:2d:00.0,id=hostdev0,bus=pci.0,multifunction=on,addr=0x6,romfile=/mnt/user/appdata/TU106 kvm.rom' \
-device vfio-pci,host=0000:2d:00.1,id=hostdev1,bus=pci.0,addr=0x6.0x1 \
-device vfio-pci,host=0000:2d:00.2,id=hostdev2,bus=pci.0,addr=0x6.0x2 \
-device vfio-pci,host=0000:2d:00.3,id=hostdev3,bus=pci.0,addr=0x6.0x3 \
-device vfio-pci,host=0000:01:00.0,id=hostdev4,bootindex=2,bus=pci.2,addr=0x1 \
-device vfio-pci,host=0000:23:00.0,id=hostdev5,bootindex=1,bus=pci.35,addr=0x1 \
-device vfio-pci,host=0000:2f:00.3,id=hostdev6,bus=pci.1,multifunction=on,addr=0x10 \
-device vfio-pci,host=0000:2f:00.4,id=hostdev7,bus=pci.1,addr=0x10.0x1 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
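Regarding the chrt failure above, my current guess (not verified) is that it's not a capability problem but that the libvirt cpu cgroup has no realtime budget, so SCHED_FIFO is refused for anything inside it. What I want to try next is roughly this; the cgroup paths are assumptions about how libvirt lays things out on Unraid, and the rt_runtime_us files only exist if the kernel has RT group scheduling enabled:
cat /sys/fs/cgroup/cpu/machine/cpu.rt_runtime_us                              # 0 would explain the EPERM
echo 900000 > /sys/fs/cgroup/cpu/machine/cpu.rt_runtime_us                    # give the machine group an RT budget
echo 800000 > /sys/fs/cgroup/cpu/machine/qemu-1-IsaacGaming.libvirt-qemu/cpu.rt_runtime_us
# then apply FIFO only to the vCPU threads instead of the whole qemu pid
for tid in $(ls /proc/15635/task); do
  grep -q '^CPU ' /proc/15635/task/$tid/comm && chrt -f -p 1 $tid
done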
-
I noticed that when adding a new USB device to a running VM that already has USB devices passed through, the script disconnects all of them and then reconnects them,
so there is a temporary disconnection of the already passed-through USB devices.
-
interesting, but this is only for devices in the list (cfg file), right?
So if I'm going to plug in new devices I need to update the list?
Another thing I noticed is that when I unplug the USB cable the VM sometimes freezes if I don't unplug it via the script first.
-
sounds very interesting, but I'm not sure if this is what I want.
I want to ensure that all the USB devices connected to the host are passed through to whichever running VM has the specific GPU passed through. Of course I want to blacklist some devices, like the Unraid OS USB drive.
If I shut down that VM and run another one that has the GPU passed through, the script should mount all the USB devices on the new VM.
Is that possible?
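To make it concrete, the mechanism I imagine underneath is just attaching/detaching USB hostdevs by vendor/product id with virsh; a rough sketch (the ids and the VM names are made-up examples):
cat > /tmp/usb-device.xml <<'EOF'
<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x046d'/>    <!-- example ids only -->
    <product id='0xc52b'/>
  </source>
</hostdev>
EOF
virsh detach-device OldVM /tmp/usb-device.xml --live    # release it from the VM that is going away
virsh attach-device IsaacGaming /tmp/usb-device.xml --live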
-
playing with ZFS RC2, it seems to be working fine. How can I configure persistent L2ARC?
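What I've understood so far from the OpenZFS 2.0 docs (so take it with a grain of salt): persistence is governed by a module parameter and otherwise it's just a normal cache vdev, something like:
zpool add tank cache /dev/nvme0n1p2                        # add the L2ARC device (pool/device names are examples)
cat /sys/module/zfs/parameters/l2arc_rebuild_enabled       # 1 = L2ARC contents are rebuilt after reboot/import
echo 1 > /sys/module/zfs/parameters/l2arc_rebuild_enabled  # in case it is not already on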
-
maybe we can wrap hdparm to make it compatible
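Rough idea of what I mean, in the same spirit as the plugin's existing smartctl.wrapper (completely untested; sg_start comes from sg3_utils, and the -y detection is simplistic):
#!/bin/bash
# hdparm.wrapper sketch: divert spindown (-y) of SAS disks to a SCSI STOP UNIT, pass everything else through
REAL=/usr/sbin/hdparm.real
dev="${@: -1}"                                   # last argument is the device
if [[ " $* " == *" -y "* ]] && smartctl -i "$dev" | grep -qi "SAS"; then
  exec sg_start --readonly --stop "$dev"
fi
exec "$REAL" "$@"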
Posted in: ZFS plugin for unRAID (Plugin Support)
Well, the idea is to have it there running 24/7 to expand my array. Some of the disks will be in the Unraid array (so I suppose no problem, as those disks work independently), but I also plan to create another ZFS raid there (3 disks).