segator

Everything posted by segator

  1. It's still spamming my log:

         Oct 3 12:11:48 segator-unraid emhttpd: cmd: /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin install https://raw.githubusercontent.com/doron1/unraid-sas-spindown/master/sas-spindown.plg
         Oct 3 12:11:48 segator-unraid root: plugin: running: anonymous
         Oct 3 12:11:48 segator-unraid root: plugin: creating: /usr/local/emhttp/plugins/sas-spindown/README.md - from INLINE content
         Oct 3 12:11:48 segator-unraid root: plugin: setting: /usr/local/emhttp/plugins/sas-spindown/README.md - mode to 644
         Oct 3 12:11:48 segator-unraid root: plugin: creating: /u
  2. Is this a bug? Core usage is different in the UI than in htop. I'm using the ZFS plugin, btw (maybe ZFS is not tracked in htop but is in the UI?). At the moment I took the screenshot I was checksumming files.
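     One way to check that theory (an assumption on my side: the UI reads the aggregate counters in /proc/stat, while htop hides kernel threads by default, toggled with K, and ZFS I/O runs in kernel threads):

         # ZFS workers are kernel threads; ps shows them with bracketed names:
         ps aux | awk '$NF ~ /^\[z_/'
         # Aggregate CPU counters, which include kernel-thread time:
         head -1 /proc/stat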
  3. Just tried it, and I see two things, running a Ryzen 3900X on Unraid 6.9-beta29. Using host-model it shows this error:

         error: internal error: qemu unexpectedly closed the monitor:
         2020-10-02T13:27:56.354809Z qemu-system-x86_64: warning: This feature depends on other features that were not requested: CPUID.8000000AH:EDX.nrip-save [bit 3]
         2020-10-02T13:27:56.355212Z qemu-system-x86_64: warning: This feature depends on other features that were not requested: CPUID.8000000AH:EDX.npt [bit 0]
         2020-10-02T13:27:56.355217Z qemu-system-x86_64: warning: This feature depends on other features t
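     For anyone comparing setups, the mode that triggers this is set in the VM's libvirt XML (a minimal sketch; the check attribute and the topology values are illustrative):

         <cpu mode='host-model' check='partial'>
           <topology sockets='1' cores='6' threads='2'/>
         </cpu>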
  4. I don't know if this might be your error: the RTX 2xxx cards also have a USB device; maybe it's the same with the RTX 3xxx. If that's the case, make sure to add it too, and remember to use "multifunction". There's a video from SpaceInvader One that explains how to pass through an RTX 2xxx correctly.
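     Roughly what that looks like in the domain XML (a sketch; the host bus/slot addresses and the USB controller sitting at function 0x2 are assumptions, check yours with lspci):

         <hostdev mode='subsystem' type='pci' managed='yes'>
           <source>
             <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <source>
             <address domain='0x0000' bus='0x0a' slot='0x00' function='0x2'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x2'/>
         </hostdev>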
  5. I partially fixed it by adding this to the virsh XML: <vcpusched vcpus='0-11' scheduler='fifo' priority='99'/>. Now I can run a high load on the host without the VM being affected (about a 200-point drop in Cinebench R20 in the VM while running a full CPU stress test on the host). But now I notice I get the stutters only under high network usage: if, for example, I run iperf3 between host and guest, I get 1.27 Gbps when I should get 10 Gbps, and the VM is completely unusable while the test is running. I tried to isolate the networks, the first one for the internet connection and the second one for the guest
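     For anyone copying this: the element belongs inside <cputune> in the domain XML (sketch; the vCPU range matches a 12-thread pinning):

         <cputune>
           <vcpusched vcpus='0-11' scheduler='fifo' priority='99'/>
         </cputune>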
  6. Same here, I'm running the latest beta29. If you need me to test something, let me know. I also have an NVMe on the host.
  7. Lots of stuttering when there's even a soft load on the host. I tried changing the vcore threads' priority to -20/-10, pinning, no pinning... the only thing that works is CPU isolation, but that's not great, because when I shut down the VM I want those cores available for other applications (like ffmpeg video encoding during the night). I'm trying to play with chrt to change the scheduler priority, without success:

         root@Tower:~# pidof qemu-system-x86_64
         15635
         root@Tower:~# chrt -f -p 1 15635
         chrt: failed to set pid 15635's policy: Operation not permitted

     Any idea or suggestion? Ryzen 3900X, MSI tomah
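     One possible cause, though this is just a guess I haven't verified on Unraid: if the kernel is built with RT group scheduling, a task whose cgroup has no realtime budget can't be switched to SCHED_FIFO. Something like this grants a budget before retrying (the machine.slice path assumes libvirt's default cgroup layout):

         # -1 here means no global RT throttling; the per-cgroup budget may still be 0:
         cat /proc/sys/kernel/sched_rt_runtime_us
         # Give libvirt's cgroup some RT runtime, then retry chrt:
         echo 950000 > /sys/fs/cgroup/cpu/machine.slice/cpu.rt_runtime_us
         chrt -f -p 1 15635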
  8. I noticed that when adding a new USB device to a running VM that already has USB passed through, the script disconnects all of them and then reconnects them. So there's a temporary disconnection of the already-passed-through USB devices.
  9. Interesting, but this is only for devices in the list (cfg file), right? So if I'm going to plug in new devices, I need to update the list? Another thing I noticed: when I unplug the USB cable, the VM sometimes freezes if I don't first unplug it via the script.
  10. Sounds very interesting, but I'm not sure this is what I want. I want to ensure that all the USB devices connected to the host are passed through to the running VM that has a specific GPU passed through. Of course, I want to blacklist some devices, like the Unraid OS USB drive. If I shut down the VM and run another one that has the GPU passed through, the script should mount all the USB devices to the new VM. Is that possible?
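     Roughly what I mean, as an untested sketch (the VM name and the blacklist IDs are placeholders):

         #!/bin/bash
         # Attach every connected USB device to a running VM, skipping a
         # blacklist of vendor:product IDs (e.g. the Unraid boot flash).
         VM="gaming-vm"
         BLACKLIST="0781:5571"

         lsusb | while read -r _ _ _ _ _ vidpid _; do
           case " $BLACKLIST " in *" $vidpid "*) continue ;; esac
           vendor=${vidpid%:*}; product=${vidpid#*:}
           echo "<hostdev mode='subsystem' type='usb'><source><vendor id='0x$vendor'/><product id='0x$product'/></source></hostdev>" > /tmp/usb-hostdev.xml
           virsh attach-device "$VM" /tmp/usb-hostdev.xml --live
         done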
  11. Wondering if this release fixes the QEMU 5 issue and the Ryzen "host-passthrough" bug.
  12. Playing with ZFS RC2, it seems to be working fine. How can I configure a persistent L2ARC?
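     For reference: persistent L2ARC landed in OpenZFS 2.0 and, as far as I can tell, is governed by a module parameter (pool and device names below are placeholders):

         # Add a cache device to the pool:
         zpool add tank cache /dev/nvme0n1
         # 1 = rebuild the L2ARC contents after reboot/import (persistent L2ARC):
         echo 1 > /sys/module/zfs/parameters/l2arc_rebuild_enabled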
  13. Maybe we can wrap hdparm to make it compatible.
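     Something along these lines, as a rough sketch (hdparm.real is a placeholder for the renamed original binary; sg_start comes from sg3_utils):

         #!/bin/bash
         # Wrapper idea: intercept "hdparm -y /dev/sdX" and spin down SAS drives
         # with a SCSI STOP UNIT instead, since hdparm only speaks ATA.
         if [ "$1" = "-y" ] && smartctl -i "$2" 2>/dev/null | grep -q SAS; then
           exec sg_start --stop "$2"
         fi
         exec /usr/sbin/hdparm.real "$@"   # ATA drives fall through to the real hdparm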
  14. Does your patch also fix the green ball in the UI? BTW, did you guys open the emhttpd binary? There you can see all the commands that run under the hood. Some things I found:

         /usr/sbin/hdparm -S0 /dev/%s &> /dev/null
         /usr/sbin/hdparm -y /dev/%s &> /dev/null
         /usr/sbin/smartctl -n standby %s %s -AH /dev/%s

     I really need this working :D, owner of 2 SAS 4Kn disks.
  15. Hi @trurl, thanks for your help. I'm running 6.8.2 on 2 different machines, and I'm also testing 6.9-beta25. I tried a clean installation in a VM to test this, and it always happens, on all the machines. Please try this to check that it's not working on a default Unraid installation:

         docker run -itd --name busybox01 -h busybox01 busybox
         docker run -itd --name busybox02 -h busybox02 busybox
         docker exec -it busybox01 ping -c 2 busybox02

     Same thing if we use the default Unraid bridge:

         docker run -itd --network=bridge --name busybox01 -h busybox01 busybox
         docker run -itd --ne
  16. Is it me, or does Docker DNS not work? If I have 2 containers and I want to reach one of the containers from the other using the container name, it's not working. It doesn't work from the host either; it's like the Docker DNS is not enabled... I tried it in a clean installation and it happens there too. So is this intended?... Any way to enable it? Thanks!
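     Worth noting: upstream Docker only resolves container names on user-defined networks, not on the default bridge, so the quickest check is repeating the test above on a custom network:

         docker network create testnet
         docker run -itd --network=testnet --name busybox01 -h busybox01 busybox
         docker run -itd --network=testnet --name busybox02 -h busybox02 busybox
         docker exec -it busybox01 ping -c 2 busybox02   # resolved by Docker's embedded DNS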
  17. I'm trying to find documentation about Unraid plugin creation, but I can't find much. I have a clear idea of how to create the page and package the plugin, but what if, for example, I want to deploy a service that starts just after the array starts?
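     From poking around other plugins (so treat this as an observed pattern, not official docs): emhttpd runs executable scripts placed under the plugin's event/ directory when the matching event fires, e.g. for array start:

         # /usr/local/emhttp/plugins/myplugin/event/started   (chmod +x; names are placeholders)
         #!/bin/bash
         # The "started" event fires once the array is online; launch the service here.
         /usr/local/emhttp/plugins/myplugin/scripts/myservice start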
  18. This will definitely need a plugin. A plugin with a UI for all the snapshot stuff and configurable cron jobs would be really nice. This plugin could also enable the SMB shadow copy feature.
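     As a stopgap until a plugin exists, a cron entry can do recurring snapshots (the dataset name is a placeholder; note the escaped % signs, which cron would otherwise treat as newlines):

         # Hourly recursive snapshot of tank/shares:
         0 * * * * zfs snapshot -r tank/shares@auto-$(date +\%Y\%m\%d-\%H\%M)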
  19. Hey, I have a Ryzen 3900X. Adding this QEMU command line, it seems I'm not able to enable "-amd-stibp"; it says my CPU doesn't have this feature. It does seem to work with host-model, but then Hyper-V doesn't work, and I need that to have WSL and Windows Sandbox enabled.
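     For reference, the command-line override is usually injected like this in the domain XML (a sketch; the extra CPU flags are illustrative, and the qemu namespace declaration on <domain> is required):

         <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
           <!-- ... -->
           <qemu:commandline>
             <qemu:arg value='-cpu'/>
             <qemu:arg value='host,topoext=on,-amd-stibp'/>
           </qemu:commandline>
         </domain>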
  20. I'm on the same version as you, and GPU passthrough works (of course with the host-model fix; otherwise, BSOD...).
  21. I modified the script for my needs: set the VM nice to -20, and enable NVIDIA persistence mode on stop. It would be nice if we had something like this https://passthroughpo.st/simple-per-vm-libvirt-hooks-with-the-vfio-tools-hook-helper/ on Unraid, so we wouldn't need to modify Limetech's scripts...
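     The helper in that link is built on libvirt's /etc/libvirt/hooks/qemu entry point; a minimal per-VM dispatcher looks roughly like this (the VM name and the actions are just my examples):

         #!/bin/bash
         # /etc/libvirt/hooks/qemu -- libvirt invokes it as: qemu <vm-name> <operation> <sub-op>
         VM="$1"; OP="$2"
         if [ "$VM" = "gaming-vm" ]; then
           case "$OP" in
             started) renice -n -20 -p "$(pidof qemu-system-x86_64)" ;;  # boost the VM
             release) nvidia-smi -pm 1 ;;                                # persistence mode on stop
           esac
         fi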
  22. Hi, I just deployed a gaming VM, as I wanted to merge my NAS and my desktop computer using Unraid. When I'm doing some ffmpeg video encoding on the host, running in Docker, the VM gets completely unusable even with nice set to the lowest priority; I also tried giving the VM the highest nice using renice. I don't want to pin CPUs, since sometimes I'll need the performance in the VM and sometimes on the host, so for me the best solution is nice, but it doesn't seem to work. If I run the ffmpeg in the VM, everything works fine and I can use the VM fluently even when it's fully loaded, because t
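     Worth trying, maybe (my understanding, to be verified: nice inside a container only ranks it against siblings in the same cgroup, not against the VM): cap the encoder's container directly with Docker's CPU controls, e.g.:

         # Low relative weight under contention (default 1024) plus a hard core cap:
         docker run --rm --cpu-shares=128 --cpus=8 linuxserver/ffmpeg -i /in.mkv -c:v libx264 /out.mkv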
  23. The release comes with Docker, or just a binary. WireGuard runs in the kernel; the Docker container or my app only sends the commands to the kernel, so the performance is native, the same as the Unraid plugin.
      - "Looks like node addition is unrestricted." To be able to add a node you need the clusterID, but the security of course should be improved.
      - "How trustful is this free service kvdb.io?" You are right; maybe we should upload the data encrypted, then problem solved.
      - "How is key management handled between peers?" Public keys are uploaded to the configuration manager and shared with the rest of the node
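     The encrypted upload could be as simple as this sketch (the bucket and secret are placeholders, and I'm assuming kvdb.io's plain HTTP PUT interface):

         # Encrypt the mesh config with a shared secret before it leaves the node:
         openssl enc -aes-256-cbc -pbkdf2 -pass pass:"$CLUSTER_SECRET" -in mesh.json -out mesh.json.enc
         curl -sf -T mesh.json.enc "https://kvdb.io/$BUCKET/mesh-config"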
  24. I'm not a fan of GUIs. I understand it may not be interesting for a lot of people, and I'm not trying to say that this implementation is better than the current one; it's a different alternative. My app provides:
      - Automatic configuration of all the nodes in the cluster (new/updated/removed nodes)
      - Support for dynamic IPs: it updates the endpoint of the node whose public IP changed on all the other nodes.
  25. Hey @bonienl, you are completely right, but first I want to make sure that what I want actually works; then I'll try to help the admins add those modules. What I'm trying to achieve is installing Kubernetes on Unraid, but I have some problems with ipset and some networking modules.
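     A quick way to see which of the usual kube-proxy/ipset modules the kernel is missing (the module list is my guess at the common set; adjust as needed):

         for m in ip_set ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh xt_set nf_conntrack; do
           modprobe -n -v "$m" 2>/dev/null && echo "$m: available" || echo "$m: MISSING"
         done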