segator

Everything posted by segator

  1.            total   used   free   shared   buff/cache   available
     Mem:          47     42      2        1            2           2
     Swap:          0      0      0
  2. time      read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  size     c  avail
     11:24:21     0     0      0     0    0     0    0     0    0   12G   12G   3.3G
  3. Hey, I've been using ZFS on Unraid for a couple of weeks with a gaming VM as my primary desktop PC (NAS + desktop PC all in one). It works fine, but sometimes Unraid decides to kill my VM with "out of memory". I assigned 16 GB of RAM to the VM and the host has 64 GB of RAM. I think ZFS is not releasing ARC memory fast enough when other containers reclaim memory, so the kernel decides to kill my VM. I can work around it with hugepages, but I don't like that, because then it's memory that ZFS cannot use while the VM is shut down (which in the end is 90% of the time). I tried to limit the ZFS ARC with echo 12884901888 >> /sys/module/zfs/parameters/zfs_arc_max but got the same result.
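     For reference, this is roughly how I'm capping the ARC and checking the limit actually took effect; just a sketch, assuming the plugin exposes the usual OpenZFS module parameters (the 12 GiB value is the one mentioned above, and putting it in /boot/config/go is only one possible way to make it persistent):
     # cap the ZFS ARC at 12 GiB at runtime (same value as above)
     echo 12884901888 > /sys/module/zfs/parameters/zfs_arc_max
     # verify the ceiling (c_max) and the current ARC size (c)
     grep -E '^(c|c_max) ' /proc/spl/kstat/zfs/arcstats
     # assumption: appending the echo to /boot/config/go makes it persistent,
     # as long as the ZFS module is already loaded by the time go runs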
  4. Nope, I still have the same issue, but the machine is more or less usable as long as there isn't extreme load on the virtual ethernet device.
  5. Running beta29 stable with my main desktop PC as a gaming Windows VM. The host-passthrough issue with Ryzen is fixed, but I'm still using host-model anyway, as I notice more performance (or at least that's what I see with Cinebench and AIDA64).
  6. Oops, I forgot to mention that this error: md: do_drive_cmd: disk7: ATA_OP e0 ioctl error: -5 appears every time Unraid tries to spin down the SAS disks. That's why I'm saying it's not working: it stays in a loop forever until the log fills up in a couple of weeks.
  7. It's still spamming my log:
     Oct 3 12:11:48 segator-unraid emhttpd: cmd: /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin install https://raw.githubusercontent.com/doron1/unraid-sas-spindown/master/sas-spindown.plg
     Oct 3 12:11:48 segator-unraid root: plugin: running: anonymous
     Oct 3 12:11:48 segator-unraid root: plugin: creating: /usr/local/emhttp/plugins/sas-spindown/README.md - from INLINE content
     Oct 3 12:11:48 segator-unraid root: plugin: setting: /usr/local/emhttp/plugins/sas-spindown/README.md - mode to 644
     Oct 3 12:11:48 segator-unraid root: plugin: creating: /usr/local/bin/unraid-spinsasdown - from INLINE content
     Oct 3 12:11:48 segator-unraid root: plugin: setting: /usr/local/bin/unraid-spinsasdown - mode to 755
     Oct 3 12:11:48 segator-unraid root: plugin: creating: /usr/sbin/smartctl.wrapper - from INLINE content
     Oct 3 12:11:48 segator-unraid root: plugin: setting: /usr/sbin/smartctl.wrapper - mode to 755
     Oct 3 12:11:48 segator-unraid root: plugin: creating: /etc/rsyslog.d/99-spinsasdown.conf - from INLINE content
     Oct 3 12:11:48 segator-unraid root: plugin: setting: /etc/rsyslog.d/99-spinsasdown.conf - mode to 644
     Oct 3 12:11:48 segator-unraid root: plugin: running: anonymous
     Oct 3 12:11:48 segator-unraid sas-spindown plugin: Installing a wrapper for smartctl...
     Oct 3 12:11:48 segator-unraid sas-spindown plugin: Installing syslog hook for SAS device spindown...
     Oct 3 12:11:49 segator-unraid kernel: mdcmd (1518646): spindown 4
     Oct 3 12:11:49 segator-unraid kernel: md: do_drive_cmd: disk4: ATA_OP e0 ioctl error: -5
     Oct 3 12:11:49 segator-unraid kernel: mdcmd (1518647): spindown 7
     Oct 3 12:11:49 segator-unraid emhttpd: error: mdcmd, 2721: Input/output error (5): write
     Oct 3 12:11:49 segator-unraid emhttpd: error: mdcmd, 2721: Input/output error (5): write
     Oct 3 12:11:49 segator-unraid kernel: md: do_drive_cmd: disk7: ATA_OP e0 ioctl error: -5
     Oct 3 12:11:50 segator-unraid rsyslogd: [origin software="rsyslogd" swVersion="8.1908.0" x-pid="27295" x-info="https://www.rsyslog.com"] start
     Oct 3 12:11:51 segator-unraid kernel: mdcmd (1518648): spindown 4
     Oct 3 12:11:52 segator-unraid SAS Assist v0.5[27387]: spinning down slot 4, device /dev/sdd (/dev/sg3)
     Oct 3 12:11:54 segator-unraid SAS Assist v0.5[27435]: spinning down slot 7, device /dev/sdg (/dev/sg6)
     Oct 3 12:11:56 segator-unraid emhttpd: error: mdcmd, 2721: Input/output error (5): write
     Oct 3 12:11:56 segator-unraid kernel: md: do_drive_cmd: disk4: ATA_OP e0 ioctl error: -5
     Oct 3 12:11:56 segator-unraid kernel: mdcmd (1518649): spindown 7
     Oct 3 12:11:58 segator-unraid SAS Assist v0.5[27517]: spinning down slot 4, device /dev/sdd (/dev/sg3)
     Oct 3 12:12:02 segator-unraid emhttpd: error: mdcmd, 2721: Input/output error (5): write
     Oct 3 12:12:02 segator-unraid kernel: md: do_drive_cmd: disk7: ATA_OP e0 ioctl error: -5
     Oct 3 12:12:03 segator-unraid kernel: mdcmd (1518650): spindown 4
     Oct 3 12:12:04 segator-unraid SAS Assist v0.5[27826]: spinning down slot 7, device /dev/sdg (/dev/sg6)
     Oct 3 12:12:06 segator-unraid emhttpd: error: mdcmd, 2721: Input/output error (5): write
     Oct 3 12:12:06 segator-unraid kernel: md: do_drive_cmd: disk4: ATA_OP e0 ioctl error: -5
     Oct 3 12:12:06 segator-unraid kernel: mdcmd (1518651): spindown 7
     Oct 3 12:12:08 segator-unraid SAS Assist v0.5[27904]: spinning down slot 4, device /dev/sdd (/dev/sg3)
  8. Is this a bug? Core usage is different in the UI than in the htop command. I'm using the ZFS plugin, btw (maybe ZFS load is not tracked in htop but is in the UI?). At the moment I took the screenshot I was checksumming files.
  9. Just tried it and I see two things, running on a Ryzen 3900X with Unraid 6.9 beta29. Using host-model it shows this error:
     error: internal error: qemu unexpectedly closed the monitor: 2020-10-02T13:27:56.354809Z qemu-system-x86_64: warning: This feature depends on other features that were not requested: CPUID.8000000AH:EDX.nrip-save [bit 3]
     2020-10-02T13:27:56.355212Z qemu-system-x86_64: warning: This feature depends on other features that were not requested: CPUID.8000000AH:EDX.npt [bit 0]
     2020-10-02T13:27:56.355217Z qemu-system-x86_64: warning: This feature depends on other features that were not requested: CPUID.8000000AH:EDX.nrip-save [bit 3]
     And using host-passthrough I see this other error:
     error: Failed to start domain IsaacGaming
     error: internal error: process exited while connecting to monitor: 2020-10-02T13:21:20.955765Z qemu-system-x86_64: -device vhost-user-fs-pci,chardev=chr-vu-fs0,queue-size=1024,tag=user,bus=pci.1,addr=0x19: Failed to write msg. Wrote -1 instead of 12.
     2020-10-02T13:21:20.955793Z qemu-system-x86_64: -device vhost-user-fs-pci,chardev=chr-vu-fs0,queue-size=1024,tag=user,bus=pci.1,addr=0x19: vhost_dev_init failed: Operation not permitted
     I suppose I need to modify some permissions, but I'm not sure which.
  10. I don't know if this could be your error: RTX 2XXX cards also have a USB device, and maybe it's the same with the RTX 3XXX. If that's the case, make sure to add it, and also remember to use "multifunction". There is a video from SpaceInvader One that explains how to correctly pass through an RTX 2xxx.
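     In case it helps, a quick way to see every function the card exposes before passing them through (the 2d:00 address is just from my own box; substitute yours):
     # list all functions on the GPU's slot (video, audio, USB, UCSI...)
     lspci -nnk -s 2d:00
     # every function listed should be passed through together, on the same
     # guest slot with multifunction enabled, as mentioned above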
  11. I partially fixed it by adding this to the virsh file: <vcpusched vcpus='0-11' scheduler='fifo' priority='99'/>. Now I can run high load on the host without the VM being affected (about a 200-point drop in Cinebench R20 in the VM when running a full CPU stress test on the host). But now I notice I only get the stutters under high network usage: if, for example, I run iperf3 between host and guest I get 1.27 Gbps when I should get 10 Gbps, and the VM is completely unusable while the test is running. I tried to isolate the networks, the first one for the internet connection and the second one for a guest-host only network (NFS, SMB...). Any idea how to fix the stutters?
     <interface type='bridge'>
       <mac address='52:54:00:e4:b2:06'/>
       <source bridge='br0'/>
       <model type='virtio-net'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x15' function='0x0'/>
     </interface>
     <interface type='bridge'>
       <mac address='52:54:00:d0:c9:ca'/>
       <source bridge='virbr0'/>
       <model type='virtio-net'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x14' function='0x0'/>
     </interface>
  12. Same here, I'm running the latest beta29; if I need to test something, let me know. I also have NVMe on the host.
  13. Lots of stuttering whenever there is even light load on the host. I tried changing the vcore thread priority to -20/-10, pinning, no pinning... The only thing that works is CPU isolation, but that's not great, because when I shut down the VM I want those cores available for other applications (like ffmpeg video encoding at night). I'm trying to play with chrt to change the scheduler priority, without success (see the sketch after the QEMU command below):
     root@Tower:~# pidof qemu-system-x86_64
     15635
     root@Tower:~# chrt -f -p 1 15635
     chrt: failed to set pid 15635's policy: Operation not permitted
     Any idea or suggestion?
     Ryzen 3900X, MSI Tomahawk X570 WiFi, 64 GB DDR4-3200 CL16 RAM.
     I have read a lot and tried different things and Unraid versions, and it's always the same. Currently running 6.9.0 beta29 with CPU pinning, passing through the onboard audio device, a USB controller and an NVIDIA RTX 2070.
     QEMU command:
     LC_ALL=C \
     PATH=/bin:/sbin:/usr/bin:/usr/sbin \
     HOME=/var/lib/libvirt/qemu/domain-1-IsaacGaming \
     XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-1-IsaacGaming/.local/share \
     XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-1-IsaacGaming/.cache \
     XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-1-IsaacGaming/.config \
     QEMU_AUDIO_DRV=none \
     /usr/local/sbin/qemu \
     -name guest=IsaacGaming,debug-threads=on \
     -S \
     -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-IsaacGaming/master-key.aes \
     -blockdev '{"driver":"file","filename":"/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
     -blockdev '{"driver":"file","filename":"/etc/libvirt/qemu/nvram/9239375a-31b7-de02-9205-3209712e6715_VARS-pure-efi.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
     -machine pc-i440fx-5.1,accel=kvm,usb=off,vmport=off,dump-guest-core=off,mem-merge=off,kernel_irqchip=on,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
     -cpu host,migratable=on,hypervisor=on,topoext=on,svm=on,invtsc=on,kvmclock=off,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vpindex,hv-vendor-id=1234567890ab,hv-frequencies,kvm=off,host-cache-info=on,l3-cache=off \
     -m 16384 \
     -overcommit mem-lock=off \
     -smp 1,maxcpus=16,sockets=1,dies=1,cores=8,threads=2 \
     -object iothread,id=iothread1 \
     -object iothread,id=iothread2 \
     -uuid ed8709ad-6ea8-ca1c-a980-2cf05d72f688 \
     -display none \
     -no-user-config \
     -nodefaults \
     -chardev socket,id=charmonitor,fd=30,server,nowait \
     -mon chardev=charmonitor,id=monitor,mode=control \
     -rtc base=localtime,driftfix=slew \
     -global kvm-pit.lost_tick_policy=delay \
     -no-hpet \
     -no-shutdown \
     -boot strict=on \
     -device pci-bridge,chassis_nr=1,id=pci.1,bus=pci.0,addr=0x4 \
     -device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.0,addr=0x5 \
     -device pci-bridge,chassis_nr=3,id=pci.3,bus=pci.0,addr=0x8 \
     -device pci-bridge,chassis_nr=4,id=pci.4,bus=pci.0,addr=0x9 \
     -device pci-bridge,chassis_nr=5,id=pci.5,bus=pci.0,addr=0xa \
     -device pci-bridge,chassis_nr=6,id=pci.6,bus=pci.0,addr=0xb \
     -device pci-bridge,chassis_nr=7,id=pci.7,bus=pci.0,addr=0xc \
     -device pci-bridge,chassis_nr=8,id=pci.8,bus=pci.0,addr=0xd \
     -device pci-bridge,chassis_nr=9,id=pci.9,bus=pci.0,addr=0xe \
     -device pci-bridge,chassis_nr=10,id=pci.10,bus=pci.0,addr=0xf \
     -device pci-bridge,chassis_nr=11,id=pci.11,bus=pci.0,addr=0x10 \
     -device pci-bridge,chassis_nr=12,id=pci.12,bus=pci.0,addr=0x11 \
     -device pci-bridge,chassis_nr=13,id=pci.13,bus=pci.0,addr=0x12 \
     -device pci-bridge,chassis_nr=14,id=pci.14,bus=pci.0,addr=0x13 \
     -device pci-bridge,chassis_nr=15,id=pci.15,bus=pci.0,addr=0x14 \
     -device pci-bridge,chassis_nr=16,id=pci.16,bus=pci.0,addr=0x15 \
     -device pci-bridge,chassis_nr=17,id=pci.17,bus=pci.0,addr=0x16 \
     -device pci-bridge,chassis_nr=18,id=pci.18,bus=pci.0,addr=0x17 \
     -device pci-bridge,chassis_nr=19,id=pci.19,bus=pci.0,addr=0x18 \
     -device pci-bridge,chassis_nr=20,id=pci.20,bus=pci.0,addr=0x19 \
     -device pci-bridge,chassis_nr=21,id=pci.21,bus=pci.0,addr=0x1a \
     -device pci-bridge,chassis_nr=22,id=pci.22,bus=pci.0,addr=0x1b \
     -device pci-bridge,chassis_nr=23,id=pci.23,bus=pci.0,addr=0x1c \
     -device pci-bridge,chassis_nr=24,id=pci.24,bus=pci.0,addr=0x1d \
     -device pci-bridge,chassis_nr=25,id=pci.25,bus=pci.0,addr=0x1e \
     -device pci-bridge,chassis_nr=26,id=pci.26,bus=pci.0,addr=0x1f \
     -device pci-bridge,chassis_nr=27,id=pci.27,bus=pci.1,addr=0x2 \
     -device pci-bridge,chassis_nr=28,id=pci.28,bus=pci.1,addr=0x3 \
     -device pci-bridge,chassis_nr=29,id=pci.29,bus=pci.1,addr=0x4 \
     -device pci-bridge,chassis_nr=30,id=pci.30,bus=pci.1,addr=0x5 \
     -device pci-bridge,chassis_nr=31,id=pci.31,bus=pci.1,addr=0x6 \
     -device pci-bridge,chassis_nr=32,id=pci.32,bus=pci.1,addr=0x7 \
     -device pci-bridge,chassis_nr=33,id=pci.33,bus=pci.1,addr=0x8 \
     -device pci-bridge,chassis_nr=34,id=pci.34,bus=pci.1,addr=0x9 \
     -device pci-bridge,chassis_nr=35,id=pci.35,bus=pci.1,addr=0xa \
     -device qemu-xhci,p2=15,p3=15,id=usb,bus=pci.0,addr=0x7 \
     -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 \
     -netdev tap,fd=32,id=hostnet0 \
     -device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:e4:b2:06,bus=pci.1,addr=0x15 \
     -netdev tap,fd=33,id=hostnet1 \
     -device virtio-net,netdev=hostnet1,id=net1,mac=52:54:00:d0:c9:ca,bus=pci.1,addr=0x14 \
     -chardev pty,id=charserial0 \
     -device isa-serial,chardev=charserial0,id=serial0 \
     -chardev socket,id=charchannel0,fd=34,server,nowait \
     -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
     -device 'vfio-pci,host=0000:2d:00.0,id=hostdev0,bus=pci.0,multifunction=on,addr=0x6,romfile=/mnt/user/appdata/TU106 kvm.rom' \
     -device vfio-pci,host=0000:2d:00.1,id=hostdev1,bus=pci.0,addr=0x6.0x1 \
     -device vfio-pci,host=0000:2d:00.2,id=hostdev2,bus=pci.0,addr=0x6.0x2 \
     -device vfio-pci,host=0000:2d:00.3,id=hostdev3,bus=pci.0,addr=0x6.0x3 \
     -device vfio-pci,host=0000:01:00.0,id=hostdev4,bootindex=2,bus=pci.2,addr=0x1 \
     -device vfio-pci,host=0000:23:00.0,id=hostdev5,bootindex=1,bus=pci.35,addr=0x1 \
     -device vfio-pci,host=0000:2f:00.3,id=hostdev6,bus=pci.1,multifunction=on,addr=0x10 \
     -device vfio-pci,host=0000:2f:00.4,id=hostdev7,bus=pci.1,addr=0x10.0x1 \
     -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
     -msg timestamp=on
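     This is roughly what I was trying to do with chrt, targeting the individual vCPU threads instead of the main QEMU PID; just a sketch, assuming QEMU names its vCPU threads "CPU n/KVM" (it still fails with "Operation not permitted" if the kernel's realtime limits forbid it):
     QPID=$(pidof qemu-system-x86_64)
     for TID in /proc/"$QPID"/task/*; do
         # vCPU threads are (assumed to be) named like "CPU 0/KVM"
         if grep -q '^CPU .*/KVM' "$TID/comm"; then
             chrt -f -p 99 "$(basename "$TID")"
         fi
     done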
  14. I notice that when adding a new USB device to a running VM that already has USB devices passed through, the script disconnects all of them and then reconnects them, so there is a temporary disconnection of the already passed-through USB devices.
  15. Interesting, but this is only for devices in the list (cfg file), right? So if I'm going to plug in new devices, I need to update the list? Another thing I noticed: when I unplug the USB cable, the VM sometimes freezes if I don't first detach it via the script.
  16. Sounds very interesting, but I'm not sure if this is what I want. I want to ensure that all the USB devices connected to the host are passed through to whichever running VM has a specific GPU passed through. Of course, I want to blacklist some devices, like the Unraid OS USB drive. If I shut down that VM and start another one that has the GPU passed through, the script should attach all the USB devices to the new VM. Is that possible?
  17. Wondering if this release fixes the QEMU 5 issue and the Ryzen "host-passthrough bug".
  18. Playing with ZFS RC2, it seems to be working fine. How can I configure a persistent L2ARC?
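     For context, this is what I understand so far; just a sketch with placeholder pool/device names, assuming OpenZFS 2.0's l2arc_rebuild_enabled parameter is what drives the persistence:
     # add an L2ARC cache vdev to the pool ("tank" and the device path are placeholders)
     zpool add tank cache /dev/disk/by-id/nvme-EXAMPLE-DISK
     # check/enable rebuilding the L2ARC contents after a reboot
     cat /sys/module/zfs/parameters/l2arc_rebuild_enabled
     echo 1 > /sys/module/zfs/parameters/l2arc_rebuild_enabled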
  19. Maybe we can wrap hdparm to make it compatible.
  20. Does your patch also fix the green ball in the UI? BTW, did you guys open the emhttpd binary? There we can see all the commands that run under the hood. Some things I found:
     /usr/sbin/hdparm -S0 /dev/%s &> /dev/null
     /usr/sbin/hdparm -y /dev/%s &> /dev/null
     /usr/sbin/smartctl -n standby %s %s -AH /dev/%s
     I really need this working :D, owner of 2 SAS 4Kn disks.
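     Along the lines of the hdparm wrapper idea above, a rough sketch of what a shim could look like; purely hypothetical, it assumes sg_start from sg3_utils is available and that a PATH shim in front of the real hdparm is enough:
     #!/bin/bash
     # hypothetical shim placed ahead of /usr/sbin/hdparm in PATH:
     # translate "hdparm -y /dev/sdX" into a SCSI STOP for SAS drives,
     # fall through to the real hdparm for everything else
     REAL=/usr/sbin/hdparm
     if [ "$1" = "-y" ] && [ -e "/sys/block/$(basename "$2")/device/sas_address" ]; then
         exec sg_start --stop "$2"
     fi
     exec "$REAL" "$@"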
  21. Hi @trurl, thanks for your help. I'm running 6.8.2 on 2 different machines, and I'm also testing 6.9-beta25. I tried running a clean installation in a VM to test this, and it always happens on all the machines. Please try this to check that it's not working on a default Unraid installation:
     docker run -itd --name busybox01 -h busybox01 busybox
     docker run -itd --name busybox02 -h busybox02 busybox
     docker exec -it busybox01 ping -c 2 busybox02
     If we use the default Unraid bridge, same thing:
     docker run -itd --network=bridge --name busybox01 -h busybox01 busybox
     docker run -itd --network=bridge --name busybox02 -h busybox02 busybox
     docker exec -it busybox01 ping -c 2 busybox02
     But if we use one created by us...
     docker network create -d bridge mybridge01
     docker run -itd --network=mybridge01 --name busybox01 -h busybox01 busybox
     docker run -itd --network=mybridge01 --name busybox02 -h busybox02 busybox
     docker exec -it busybox01 ping -c 2 busybox02
     ...then it works.
  22. Is it me, or does Docker DNS not work? If I have 2 containers and I want to access one of them from the other using the container name, it's not working. Not from the host either; it's as if Docker DNS is not enabled... I tried it on a clean installation and it still happens. So is this intended? Any way to enable it? Thanks!
  23. I'm trying to find documentation about Unraid plugin creation, but I can't find much. I have a clear idea of how to create the page and package the plugin, but, for example, what if I want to deploy a service that starts just after the array starts?
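     From what I could piece together (not sure it's the official way), emhttpd seems to run executable scripts under each plugin's event directory when the array state changes, so something like this might be enough to start a service right after the array comes up; the event name "started" and the paths are my assumptions:
     # hypothetical layout inside the plugin: an executable "started" event script
     mkdir -p /usr/local/emhttp/plugins/myplugin/event
     printf '%s\n' '#!/bin/bash' \
         '# start my service once the array (and its shares) are available' \
         '/usr/local/emhttp/plugins/myplugin/scripts/start-my-service.sh' \
         > /usr/local/emhttp/plugins/myplugin/event/started
     chmod +x /usr/local/emhttp/plugins/myplugin/event/started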
  24. This will definitely need a plugin. It would be so nice to have a plugin with a UI for all the snapshot stuff and a configurable cron job. This plugin could also enable the SMB shadow copy feature.
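     In the meantime, this is the kind of cron line such a plugin could generate: a scheduled ZFS snapshot named in the GMT format that Samba's shadow_copy2 module expects (the dataset name and schedule are placeholders, and the % signs have to be escaped inside crontab):
     # hourly snapshot, named so vfs_shadow_copy2 can show it as a "Previous Version"
     0 * * * * zfs snapshot "tank/share@GMT-$(date -u +\%Y.\%m.\%d-\%H.\%M.\%S)"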
  25. Hey, I have a Ryzen 3900X. Adding this QEMU command line, it seems I can't apply "-amd-stibp"; it says my CPU doesn't have this feature. Anyway, it seems to work with host-model, but then Hyper-V doesn't work, and I need it to have WSL and Windows Sandbox enabled.