therapist

Everything posted by therapist

  1. So I have a VM on this unraid box that sits on a VLAN (192.168.40.253) and connects to unraid SMB at 192.168.40.2, but file transfer rates come nowhere close to the reported bandwidth. There are no wires involved, just the virtual 10gbe adapter and unraid. I have reset the VM network settings to default and adjusted max RSS to 6 to match the cores on the VM. Unraid has all default network settings except for buffers set through the Tips & Tweaks plugin... which shouldn't matter for a VM using br0.x, no? I get better than gigabit, but something isn't right, since I'm not seeing anywhere near proper speeds.
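A minimal sketch of how one might confirm the raw network path before blaming SMB (IPs taken from the post above; assumes an iperf3 listener can be started on the unraid side, e.g. via an iperf3 docker):
  # on unraid (or the iperf3 container): start a listener
  iperf3 -s
  # from inside the VM: raw TCP throughput to the SMB address, 4 parallel streams
  iperf3 -c 192.168.40.2 -P 4 -t 30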
  2. This is indicating nothing better than gigabit... all links report 10gb. Any idea where to look into what I am doing wrong?
  3. I have been doing my testing with both. I have an EVO 860 SSD as my main cache disk, so current performance saturates that on read/write. I have an Intel NVMe disk installed for VM VHDs and can test the full 10gbe bandwidth through that. Copying a test file from the RAID1 "cache-protected" pool to the nvme gets the speeds I would expect from RAID1 SAS SSD --> NVMe. Benchmarking the VHD for the VM (which is on the nvme) gets results I expect. iperf shows the bandwidth is there. But when I transfer to a disk share over SMB, the speed doesn't translate. I thought at first that it was a networking issue, because originally my VM was coming out through br0 on unraid. I was able to pass through one of the 10gbe ports from my motherboard directly to the VM, which improved speed, but not to where I think it should be.
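One way to separate the disk from the protocol (a sketch; the /mnt/cnvme path is taken from the VM XML later in this thread and may need adjusting, and oflag=direct is used so the page cache doesn't inflate the number):
  # write 8 GiB straight to the NVMe pool to measure raw disk write speed
  dd if=/dev/zero of=/mnt/cnvme/ddtest.bin bs=1M count=8192 oflag=direct status=progress
  rm /mnt/cnvme/ddtest.bin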
  4. I have hit a bit of a speed limit on my setup. iperf3 tests indicate 8+ gb/s of available bandwidth between unraid and my test PC, but I can't transfer over SMB faster than 490-520 MB/s.
unraid v6.11.5 on EPYCD8-2T w/ 7551
eth0 10gb rj45 --> sfp+ @ mikrotik CRS305-1G-4S+
test share/disk = INTEL SSDPF2KX038TZ
test pc = unraid VM w/ mobo eth1 passed through, 24c, 24gb memory, disk = Seagate FireCuda 530
diagnostics attached: crunch-diagnostics-20240129-1604.zip
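For reference, a sketch of how the effective Samba settings could be confirmed on the server side (testparm ships with Samba; exact output varies by version):
  # dump the loaded Samba config and look for multichannel / aio / interface settings
  testparm -s 2>/dev/null | grep -Ei 'multi channel|aio read|aio write|interfaces'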
  5. I am working to improve SMB performance and have found a collection of settings that work well. I would like to deploy them to all interfaces, but I get strange delays / drops when applied as written:
server multi channel support = yes
aio read size = 1
aio write size = 1
interfaces = "192.168.1.248;capability=RSS,speed=10000000000" "192.168.40.2;capability=RSS,speed=10000000000"
With both interface characteristics specified, the main VM on 192.168.1.x hangs on file transfer and transfers dreadfully slowly. If I remove the 2nd interface (192.168.40.2), the main VM on 192.168.1.x works OK, but the VM on that interface does not exceed ~175 MB/s transfer. Are the interfaces listed correctly?
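A sketch, assuming the settings live in unraid's SMB Extras (Settings > SMB > SMB Extras, which is persisted to /boot/config/smb-extra.conf on the flash drive); after editing, samba can be told to reload without stopping the array:
  # verify the extras file contains the intended lines
  cat /boot/config/smb-extra.conf
  # ask all smbd processes to re-read the configuration
  smbcontrol all reload-config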
  6. Is there a way to update FFmpeg inside this container? I am getting errors whenever a DVR'd program is converted to .mp4 through a script. The script has been in use for a long time (years) and is only recently showing an error. The conversion completes and the video files are seemingly unaffected... but I'd like to prevent the error if possible. Any advice otherwise? transcode_internal.sh
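A quick sketch for pinning down which ffmpeg build the script is actually using (the container name 'dvr' is a placeholder; note that anything changed inside the container is lost when the image updates, so a persistent fix usually means mounting a newer ffmpeg binary in or waiting for an updated image):
  # locate and version-check ffmpeg inside the running container
  docker exec dvr sh -c 'command -v ffmpeg && ffmpeg -version | head -n 1'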
  7. adding diags crunch-diagnostics-20240109-1609.zip
  8. I have finally taken the plunge to bring 10gbe into my network after many years of gigabit and multi-gigabit LAGG. All hardware is capable, but I am having issues with bandwidth.
BASIC HARDWARE SPECS:
UNRAID 6.11.5 - EPYCD8-2T w/ 7551 w/ built-in x550
PFSENSE 2.7.0 - DELL 7010 i7-2600k + mellanox connectx
core switch - ubiquiti usw-pro-24-poe
10gbe addon switch - mikrotik CRS305-1G-4S+
DAC cable from pfsense to port 1 on ubiquiti
fiber between ubiquiti and mikrotik
cat7 cables w/ sfp+/rj45 adapters to mikrotik
all connections report 10gb connectivity
My main PC is a VM on unraid w/ the latest virtio drivers -- I passed through one of the 10gbe ports from my EPYCD8-2T to the VM (192.168.1.192).
VPN PC is a VM on unraid w/ updated (but not latest, 2021) drivers, running the virtio network driver on a bridged vlan (192.168.40.253 @ br0.40).
PINGBOX is a VM on unraid w/ old drivers (2017), running the virtio network driver on the bridged main lan (192.168.1.10 @ br0).
When running iperf tests I am mostly not getting the 10gbe bandwidth I think should be there:
from main pc to iperf docker @ unraid
from main pc to VPN pc on the other subnet
from main PC to PINGBOX
from main pc to iperf @ pfsense
from PINGBOX VM to unraid iperf docker
I feel like somehow it's not routing properly, but I have no idea why. I understand some bandwidth loss between vlans, but no way it should be that much (only pfblocker, no IDS or suricata). At the very least I should see much closer to line speed on the same subnet. For fun I set up L3 on the ubiquiti and moved a speedtest docker over to test. Theoretically that traffic should never route out of the switch, and I am not seeing anywhere near the speeds I should have. Am I missing something with 10gbe? All MTUs are 1500, but I should see faster even with that, no? Any assistance would be hugely appreciated.
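A sketch of the link-level checks on the unraid side (the interface name eth0 is an assumption; offload flag names vary by driver):
  # confirm negotiated speed and MTU on the 10gbe port
  ethtool eth0 | grep -i speed
  ip link show eth0 | grep -o 'mtu [0-9]*'
  # confirm segmentation/receive offloads are on; single-stream 10gbe suffers badly without them
  ethtool -k eth0 | grep -E 'tcp-segmentation-offload|generic-receive-offload'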
  9. I am running unraid on an EPYCD8-2T with dual Intel x550 10gbe ports. Up until recently I had a full 1gb network and both ports were bonded, but I am finally upgrading my router / unraid box to 10gbe operation. Unraid uses port 0, and I would like to pass port 1 to my Windows 10 VM -- I am having issues achieving full 10gb throughput with a single interface. Can I pass the device by adding it to the XML, or do I have to stub it first?
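A sketch of the first step either way -- identify the second x550 port's PCI address and IOMMU group (the 01:00.1 address below is only illustrative):
  # list the ethernet ports with PCI addresses and vendor:device IDs
  lspci -nn | grep -i ethernet
  # show which IOMMU group the port to be passed through sits in
  readlink /sys/bus/pci/devices/0000:01:00.1/iommu_group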
  10. 'memcache.local' => '\\OC\\Memcache\\APCu', 'memcache.locking' => '\\OC\\Memcache\\APCu',
  11. Getting an error w/ nvme and ssd: I am positive that there are partitions on the disk
  12. Thank you -- after running it, I learned that I have a "legacy" cert and can no longer utilize SSL going forward on 6.9.2.
  13. Starting yesterday, I am no longer able to connect to my server UI by the regular https link. I normally type in "https://crunch/" and it is relayed to unraid.net for SSL/TLS. This has been working fine for years at this point, but yesterday I know there were some backend issues that I think are still bleeding forward into today. All services are currently operational, and I can access the webUI by typing in https://IP_ADDRESS and dismissing the warnings. I have tried refreshing the cert and it says that my IP is updated with unraid.net, but on my end... no joy. I have also cleared the local DNS cache, restarted LAN PCs, everything short of a full system restart (which shouldn't really be necessary...). Any advice would be appreciated.
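Two quick checks from a LAN client, as a sketch (the 'crunch' name comes from the post; 192.168.1.248 is the server address mentioned elsewhere in this thread and may differ):
  # see what the friendly name currently resolves to
  nslookup crunch
  # inspect the certificate the webUI is actually presenting
  openssl s_client -connect 192.168.1.248:443 -servername crunch </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates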
  14. The screenshot from 3DMark is actually from a Q35 version I created to test exactly that. The Q35 template was bare bones w/ nothing but the device passthroughs -- same results. I'd be okay with the performance -- maybe I am CPU limited now? But that big drop / stutter happens (see the graph) and it is jarring.
  15. I run a W10 gaming VM on my unraid 6.9.2 server which is built as follows: AsrockRack EPYCDT-2t w/ 7571 and 128gb DDR4 2133mhz ECC Geforce 1050 Geforce 3070ti 9305-8i w/ SAS3 expander Various NVME / UD disks VM xml is below: <?xml version='1.0' encoding='UTF-8'?> <domain type='kvm'> <name>GEIST v2</name> <uuid>3b6ceeeb-ee73-fb44-2964-80e165976161</uuid> <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata> <memory unit='KiB'>16777216</memory> <currentMemory unit='KiB'>16777216</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>24</vcpu> <cputune> <vcpupin vcpu='0' cpuset='26'/> <vcpupin vcpu='1' cpuset='58'/> <vcpupin vcpu='2' cpuset='27'/> <vcpupin vcpu='3' cpuset='59'/> <vcpupin vcpu='4' cpuset='28'/> <vcpupin vcpu='5' cpuset='60'/> <vcpupin vcpu='6' cpuset='29'/> <vcpupin vcpu='7' cpuset='61'/> <vcpupin vcpu='8' cpuset='30'/> <vcpupin vcpu='9' cpuset='62'/> <vcpupin vcpu='10' cpuset='31'/> <vcpupin vcpu='11' cpuset='63'/> <vcpupin vcpu='12' cpuset='14'/> <vcpupin vcpu='13' cpuset='46'/> <vcpupin vcpu='14' cpuset='15'/> <vcpupin vcpu='15' cpuset='47'/> <vcpupin vcpu='16' cpuset='10'/> <vcpupin vcpu='17' cpuset='42'/> <vcpupin vcpu='18' cpuset='11'/> <vcpupin vcpu='19' cpuset='43'/> <vcpupin vcpu='20' cpuset='12'/> <vcpupin vcpu='21' cpuset='44'/> <vcpupin vcpu='22' cpuset='13'/> <vcpupin vcpu='23' cpuset='45'/> <emulatorpin cpuset='1,33'/> </cputune> <resource> <partition>/machine</partition> </resource> <os> <type arch='x86_64' machine='pc-i440fx-5.1'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/3b6ceeeb-ee73-fb44-2964-80e165976161_VARS-pure-efi.fd</nvram> </os> <features> <acpi/> <apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vpindex state='on'/> <synic state='on'/> <stimer state='on'/> <reset state='on'/> <vendor_id state='on' value='1234567890ab'/> <frequencies state='on'/> </hyperv> <kvm> <hidden state='on'/> </kvm> <vmport state='off'/> <ioapic driver='kvm'/> </features> <cpu mode='host-passthrough' check='none' migratable='on'> <topology sockets='1' dies='2' cores='6' threads='2'/> <cache mode='passthrough'/> <feature policy='require' name='topoext'/> <feature policy='disable' name='monitor'/> <feature policy='require' name='x2apic'/> <feature policy='require' name='hypervisor'/> <feature policy='disable' name='svm'/> </cpu> <clock offset='localtime'> <timer name='rtc' present='no' tickpolicy='catchup'/> <timer name='pit' present='no' tickpolicy='delay'/> <timer name='hpet' present='no'/> <timer name='hypervclock' present='yes'/> <timer name='tsc' present='yes' mode='native'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='block' device='disk'> <driver name='qemu' type='raw' cache='none' discard='unmap'/> <source dev='/dev/disk/by-id/ata-Samsung_SSD_850_EVO_500GB_S3PTNF0JB57631J'/> <backingStore/> <target dev='hdc' bus='virtio'/> <boot order='1'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x02' function='0x0'/> </disk> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback' discard='unmap'/> <source file='/mnt/cnvme/VMS/VHD_BULK/GEIST/GEIST_Z.img'/> <target dev='hdd' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x01' function='0x0'/> </disk> <disk type='block' 
device='disk'> <driver name='qemu' type='raw' cache='none' discard='unmap'/> <source dev='/dev/nvme1n1'/> <backingStore/> <target dev='hde' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x04' function='0x0'/> </disk> <controller type='pci' index='0' model='pci-root'/> <controller type='pci' index='1' model='pci-bridge'> <model name='pci-bridge'/> <target chassisNr='1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </controller> <controller type='pci' index='2' model='pci-bridge'> <model name='pci-bridge'/> <target chassisNr='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </controller> <controller type='pci' index='3' model='pci-bridge'> <model name='pci-bridge'/> <target chassisNr='3'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/> </controller> <controller type='virtio-serial' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </controller> <controller type='usb' index='0' model='ich9-ehci1'> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/> </controller> <controller type='scsi' index='0' model='virtio-scsi'> <address type='pci' domain='0x0000' bus='0x00' slot='0x11' function='0x0'/> </controller> <interface type='bridge'> <mac address='52:54:00:f5:93:8b'/> <source bridge='br0'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </interface> <serial type='pty'> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x21' slot='0x00' function='0x0'/> </source> <rom file='/mnt/cache_protected/VMS/vbios/GeForce_RTX_3070_Ti.rom'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0' multifunction='on'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x21' slot='0x00' function='0x1'/> </source> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x1'/> </hostdev> <memballoon model='none'/> </devices> <seclabel type='dynamic' model='dac' relabel='yes'/> </domain> Ive only had the 3070ti in the system for about 2 months -- prior to this I had a 1070 which performed admirably but at its limits. I have felt that the 3070ti has been BETTER, but "underwhelming" since its installation, and recently I have been suffering from stutters in game and during benchmaks...this had never happened w/ the 1070. 
I have all the usual XML tweaks -- MSI has been applied. The 3DMark benchmarks w/ the 1070 were 5500-5800, and with the 3070ti they were 9500-10500, which is definitely improved but is not consistent. Any advice?
  16. I've recently upgraded my server with a few NVMe drives, one of which replaced a VHD for a VM. I have been passing whole disks through to VMs for quite some time like this:
<disk type='block' device='disk'> <driver name='qemu' type='raw' cache='none' discard='unmap'/> <source dev='/dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_1TB_XXXXXXXXXX'/> <target dev='hdf' bus='scsi'/> <boot order='2'/> <alias name='scsi0-0-0-3'/> <address type='drive' controller='0' bus='0' target='0' unit='3'/> </disk>
After doing some reading I learned that NVMe drives can be passed through directly so they show up as an actual NVMe drive (as opposed to thin provisioned):
<hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x41' slot='0x00' function='0x0'/> </source> <alias name='hostdev2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </hostdev>
Is there any actual functional difference in passing devices one way or the other? Any performance benefit? ALSO, I learned that VHDs can be passed through as NVMe devices:
<qemu:commandline> <qemu:arg value='-drive'/> <qemu:arg value='file=/mnt/cnvme/VMS/VHD_BULK/GEIST/GEIST_Z.img,format=raw,if=none,id=NVME1'/> <qemu:arg value='-device'/> <qemu:arg value='nvme,drive=NVME1,serial=nvme-1'/> </qemu:commandline>
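A sketch of the usual sanity check before the vfio route (the PCI address 0000:41:00.0 is taken from the hostdev snippet above): the NVMe should sit in its own IOMMU group, because every device in that group gets detached from the host together.
  # list everything sharing an IOMMU group with the NVMe controller
  grp=$(basename "$(readlink /sys/bus/pci/devices/0000:41:00.0/iommu_group)")
  ls /sys/kernel/iommu_groups/$grp/devices/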
  17. All was back to normal until 2022.02.24. Operations are all working, but I am getting a lot of PHP errors in the log:
  18. Having an issue with the latest few releases where device activity is not showing on any of the "passed through" devices. Only the mountpoint disk for my RAID 10 array is showing activity, but there are definitely reads and writes being registered, just no disk speed shown.
  19. Devices are all now in UD, but the first device is blank. When changing the name, it reverts to blank.
  20. The name field is blank, yes. There aren't any duplicate entries -- the drives still in historical include 3/6 drives in a RAID10 array and a passed-thru VM boot disk. All settings have the name field blank.
  21. Having an issue this morning w/ version 2022.01.19a on unRAID 6.9.1. Upon upgrade, ALL of my unassigned devices went into the Historical Devices section, with pool/drive function seemingly unaffected. I stopped the array as per a previous recommendation -- items remained historical. I rebooted the server (which greatly pained me after several months of uptime) and ONE device came back to UD w/ no device tag. I went into the settings for each drive and clicked done, and some of the drives have returned to UD. See screenshot: That is currently where I stand -- I tried changing the device ID for one of the returned drives and it always reverts to dev (dev9), and there is still a device that has a blank ID. The other drives are physically still in the system and are functioning (EVO_850_1 is a VM boot disk and I am typing this message through that machine). Please advise.
  22. It's not just you. The easiest solution is to install a "custom" kernel with updated drivers -- super easy.
  23. Can the stable ver 6.8.3 get the latest nvidia drivers? Plex betas (and likely the next stable) now require 450.66 or later drivers to keep NVENC operational -- hw transcoding is broken on the latest versions because of this. I prefer not to run betas, so essentially I am stuck on a previous working Plex version (1.20.1.3252-1-01).
https://www.reddit.com/r/PleX/comments/j0bzu1/it_was_working_before_but_now_im_having_issues/
https://forums.plex.tv/t/help-fixing-my-broken-nvidia-nvenc-driver-config-ubuntu-20-04/637854
https://forums.plex.tv/t/nvenc-hardware-encoding-broken-on-1-20-2-3370/637925
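For reference, a sketch of how to confirm which driver version the Plex container actually sees ('plex' is a placeholder for the real container name):
  # print driver version and GPU name from inside the container
  docker exec plex nvidia-smi --query-gpu=driver_version,name --format=csv,noheader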
  24. @b3rs3rk I took the plunge & upgraded to 6.8.3 -- all is well Thanks