kakashisensei's Achievements





  1. I am having an issue where, if my unraid server improperly shuts off or resets, the qbittorrent settings revert back to default. Not the docker config, but the in-application settings. I've set the qbittorrent.conf file to read-only for all users/grps, but it still reverts to default. Has anyone experienced this and found a solution? Thanks.
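Since a read-only bit won't stop a container running as root from rewriting the file, one crude workaround is a script (run via cron or the User Scripts plugin) that keeps a known-good copy of qBittorrent.conf and restores it when the live file comes back empty after an unclean shutdown. This is only a sketch: `restore_conf` is my own helper name, and the path is an illustrative binhex-style appdata location you would adjust.

```shell
# Keep a backup of qBittorrent.conf and restore it if the live copy is
# empty or missing (as after an unclean shutdown). restore_conf is a
# hypothetical helper; adjust the path to your own appdata layout.
restore_conf() {
    conf=$1
    backup=$conf.good
    if [ -s "$conf" ]; then
        cp "$conf" "$backup"      # live file looks sane: refresh the backup
    elif [ -s "$backup" ]; then
        cp "$backup" "$conf"      # live file empty: restore last good copy
    fi
}

# Example (illustrative path):
# restore_conf /mnt/user/appdata/binhex-qbittorrentvpn/qBittorrent/config/qBittorrent.conf
```

Running it on a short schedule means the backup is at most a few minutes stale, which is about as good as qbittorrent's own save-on-exit behavior.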
  2. I had been using the transmission vpn docker for quite a while, but it frequently had issues after updating, and it finally stopped working correctly in a way I was unable to fix. I tried all the binhex torrent client dockers and this one is the best imo; the others had various issues. The one issue for me with this one is that the move operation from the incomplete to the complete folder seems wasteful for ssds. I have my incomplete folder on an ssd cache-only share, since I don't want active torrent downloads keeping my hdds spinning. When torrents complete, they move to the complete folder, which is also on the ssd cache, and the daily mover job then moves completed downloads to the hdd array. With qbittorrent, the move from incomplete to complete seems to copy the entire contents instead of just changing the pointer to the file location. This basically consumes twice the write endurance of an ssd if the move happens on the same ssd. Has anyone noticed this and figured out a solution? Thanks.
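For what it's worth, on linux a move only degrades to a full copy when the source and destination sit on different filesystems; within one mount, a move is a metadata-only rename. Inside a container, incomplete and complete folders mapped as two separate volume mappings can look like two filesystems to the app, which would force the copy; mapping both as subfolders under a single container path is the usual workaround (an assumption on my part for this container, not something I've confirmed). A quick sketch with throwaway paths shows how to check which case you're in:

```shell
# If the inode survives the move, no data was rewritten (rename, not copy).
# Throwaway paths for illustration only.
tmp=$(mktemp -d)
mkdir -p "$tmp/incomplete" "$tmp/complete"
echo "payload" > "$tmp/incomplete/file.bin"
before=$(stat -c %i "$tmp/incomplete/file.bin")
mv "$tmp/incomplete/file.bin" "$tmp/complete/file.bin"
after=$(stat -c %i "$tmp/complete/file.bin")
echo "inode before=$before after=$after"
rm -rf "$tmp"
```

Run the same check on the real incomplete/complete paths inside the container: if the inode changes, the app is seeing two filesystems and doing a full copy.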
  3. I've been getting this warning in the syslog every minute. Eth0 keeps being renamed to some unique string.

Nov 3 14:57:48 Tower kernel: igb 0000:04:00.0 eth0: mixed HW and IP checksum settings.
Nov 3 14:57:49 Tower kernel: igb 0000:04:00.0 eth0: mixed HW and IP checksum settings.
Nov 3 14:57:49 Tower kernel: eth0: renamed from veth3d8769f
Nov 3 14:57:49 Tower kernel: device br0 entered promiscuous mode
Nov 3 14:58:50 Tower kernel: device br0 left promiscuous mode
Nov 3 14:58:50 Tower kernel: veth3d8769f: renamed from eth0

These are the iommu grps for the nics:

IOMMU group 28: [8086:1533] 04:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
IOMMU group 29: [8086:1533] 05:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)

My motherboard has dual intel nics and ipmi. The ipmi will share use of a nic, but primarily defaults to the first one. The ipmi has its own configured ip, so there are two ips on the first nic: one for the ipmi and one for unraid. In order to pass the second nic to a VM, I added the line to the flash boot config that hides that nic from unraid. Any help is greatly appreciated. Thanks.
  4. Never mind, found the answer here: https://forums.unraid.net/topic/73060-same-pci-id-in-multiple-iommu-groups-solved/
  5. My motherboard has dual intel nics, each with its own chip. But in the IOMMU groups they show the same hardware id while sitting in different groups. This is how it shows up:

IOMMU group 28: [8086:1533] 04:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
IOMMU group 29: [8086:1533] 05:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)

How can I change this command in the flash bootup to allow just one of the controllers to be selected in VM templates?

append vfio-pci.ids=XXXX:XXXX

Thanks.
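As a sketch of one alternative: since both NICs share 8086:1533, binding by vendor:device ID grabs both, but the kernel's sysfs `driver_override` interface lets you bind a single device by its PCI address instead. Lines like the following (using the addresses from the listing above) could go in the flash go script; this is the documented kernel mechanism, not an Unraid-specific recipe I've verified on this board:

```shell
# Bind only the second NIC (0000:05:00.0) to vfio-pci by PCI address,
# leaving 04:00.0 on the igb driver for unraid.
modprobe vfio-pci
echo vfio-pci > /sys/bus/pci/devices/0000:05:00.0/driver_override
echo 0000:05:00.0 > /sys/bus/pci/drivers/igb/unbind
echo 0000:05:00.0 > /sys/bus/pci/drivers_probe
```

The `driver_override` write tells the kernel that only vfio-pci may claim that one device, so re-probing it after the unbind attaches vfio-pci without touching the identical NIC at 04:00.0.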
  6. I've found some configurations that halfway bridge the gap to bare metal. The cpu core assignments that give the best performance are somewhat perplexing.

Passing these hyper-v features improved single thread performance noticeably; not sure why. I found a blog mentioning that this hyper-v xml config gave him the best results, and reading the description of each feature, it's not obvious to me why it performs better. The cpu-z single thread score went up by ~20-30. More importantly, the performance loss while streaming is not as bad with these features on: before, I'd see 30-40 less cpu-z single thread while streaming the desktop, now it is only ~15 less. Gta5 benchmark results improved a bit as well with these features on.

<vpindex state='on'/>
<synic state='on'/>
<stimer state='on'/>
<frequencies state='on'/>

I've found that passing only the primary core threads 0,1,2,3 to the VM gives the best overall performance and the best single thread performance. I also have the emulator pinned on HT 4 and an iothread on HT 5, though I haven't noticed a performance improvement from iothread pinning. I get ~320 cpu-z single thread, pretty close to the 330 on bare metal, and the best gta5 benchmark results with this config. It is about halfway in between my bare metal and VM results from earlier. I figure the remaining difference is that bare metal runs 2 threads per core, while this vm config is only 1 thread per core.

Frames Per Second (higher is better): Min, Max, Avg
  Pass 0: 19.245239, 106.438461, 89.463310
  Pass 1: 80.630203, 154.502197, 128.269775
  Pass 2: 61.948658, 142.904099, 109.883057
  Pass 3: 5.634600, 159.941147, 119.195976
  Pass 4: 35.715050, 166.900330, 103.768700

Time in milliseconds (lower is better): Min, Max, Avg
  Pass 0: 9.395100, 51.960903, 11.177767
  Pass 1: 6.472400, 12.402301, 7.796069
  Pass 2: 6.997700, 16.142399, 9.100584
  Pass 3: 6.252300, 177.474899, 8.389544
  Pass 4: 5.991600, 27.999401, 9.636817

I was under the assumption that passing each primary core thread plus its HT pair is optimal, but I am not seeing that. Originally I passed core threads 1,2,3 and their HT pairs 5,6,7, with the emulator pinned to HT 4. That gave much lower single and multi thread performance. It seems all four cores are critical to getting the best performance: even though that config is 6 threads compared to 4, it yields worse performance because it is only 3 cores.

I have also passed all cpu threads 0-7 to the VM, which gives a cpu-z single thread result of ~300 and the best multi thread result of ~1500. But the performance in gta is not as good as passing only the primary core threads 0-3. If I define an emulator pin with this config, I get absolutely atrocious performance, so I didn't define emulator or iothread pins. I don't know why; this should give me the closest performance to bare metal, but it doesn't.

So TLDR, to recap, I get the closest to bare metal performance in gta5 and the best cpu-z single thread in a VM with the following:
- add the hyper-v features mentioned above in the xml
- pass only the primary core threads 0,1,2,3 and none of the HT pairs to the VM, with HT 4 as the emulator pin
- turn off spectre/meltdown mitigations in both host and vm (if bare metal also had them off)
- pass through a physical NIC; it performs better than a virtual NIC and takes some load off the cpus
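For reference, the host half of the mitigation toggle in the recap above is a standard kernel boot parameter. On unraid it would be added to the append line in syslinux.cfg on the flash drive; the fragment below assumes an otherwise default boot entry, and `mitigations=off` is the upstream kernel switch, with the usual security tradeoff.

```shell
# /boot/syslinux/syslinux.cfg (fragment, assuming a default boot entry).
# mitigations=off disables spectre/meltdown mitigations on the host.
label Unraid OS
  menu default
  kernel /bzimage
  append mitigations=off initrd=/bzroot
```

The guest-side toggle is separate and done inside windows (e.g. the InSpectre tool or the registry), so both sides need to be set for a fair bare metal comparison.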
  7. I am trying to get more cpu performance out of my win10 VM. I have noticed that in cpu intensive games, performance is quite lackluster, albeit the system is quite old. The CPU is an i7 sandybridge mobile, 4 core/8 thread, 3.2ghz all-core turbo, 3.5ghz single core. I have passed 12GB of ram (dual channel ddr3 1600mhz). The GPU is a 980ti 6GB w/ the nvidia 446 driver.

On bare metal with spectre mitigations off, I get a ~330 cpu-z single thread benchmark score, and a passmark v9 cpu mark total of ~7600. This is the gta5 benchmark result at 1600x900 low settings:

Frames Per Second (higher is better): Min, Max, Avg
  Pass 0: 62.167404, 119.561455, 102.000427
  Pass 1: 94.056564, 165.343918, 139.509125
  Pass 2: 77.531998, 155.506470, 125.293236
  Pass 3: 89.976601, 162.171799, 136.439087
  Pass 4: 48.572926, 200.867737, 125.084503

Time in milliseconds (lower is better): Min, Max, Avg
  Pass 0: 8.363899, 16.085600, 9.803881
  Pass 1: 6.048000, 10.631900, 7.167990
  Pass 2: 6.430601, 12.897901, 7.981277
  Pass 3: 6.166300, 11.114000, 7.329278
  Pass 4: 4.978400, 20.587601, 7.994596

On the VM I am using Q35-v4.2 OVMF, cpu host/cache passthrough, hyper-v = yes, and spectre mitigations off on both host and VM. I get ~260-270 cpu-z single thread. Interestingly, the passmark cpu mark score only drops to ~7300. These are the gta5 benchmarks at the same settings:

Frames Per Second (higher is better): Min, Max, Avg
  Pass 0: 16.969919, 86.502945, 72.574181
  Pass 1: 46.607010, 125.070351, 101.227242
  Pass 2: 47.106037, 136.561646, 94.093735
  Pass 3: 64.095169, 130.890060, 99.548531
  Pass 4: 35.704082, 161.464798, 88.457596

Time in milliseconds (lower is better): Min, Max, Avg
  Pass 0: 11.560300, 58.927799, 13.779005
  Pass 1: 7.995500, 21.455999, 9.878764
  Pass 2: 7.322701, 21.228701, 10.627700
  Pass 3: 7.639999, 15.601800, 10.045352
  Pass 4: 6.193300, 28.008001, 11.304852

On the VM, I can only allocate 3 cores and their HT pairs; I have noticed that passing all cores to the VM gives quite bad performance. I keep core 0 for the host and its HT thread for the vm emulator. This is the cpu assignment I found gives the best performance:

<vcpu placement='static'>6</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='5'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='6'/>
  <vcpupin vcpu='4' cpuset='3'/>
  <vcpupin vcpu='5' cpuset='7'/>
  <emulatorpin cpuset='4'/>
</cputune>

Since it is a headless server, I use nvidia gamestream for remote access, which further hurts performance: cpu-z single thread drops to ~230-240 while streaming the desktop. The gta5 results above were without any streaming. Since online mode in this game is very unoptimized, it can cost another 20-50% in performance; I see drops to 30fps in game quite often. I don't expect the performance loss to be entirely attributable to the one fewer core, especially with the huge drop-off in cpu-z single thread results.

I have tried all of the following, but nothing significantly bridges the gap between bare metal and VM cpu performance:
1. Changed to cpu model "Sandybridge" instead of cpu host passthrough. Resulted in significantly lower performance.
2. Passed through the 2nd NIC instead of using a virtual NIC. Resulted in slightly more performance.
3. Checked cpu turbo speeds on the host. It does hit 3.2ghz all-core in game on the VM.
4. Isolated the cpu cores used by the VM. No noticeable improvement.
5. Changed cpu pinning and emulator pinning, but the above config gives the best performance.
6. Updated kvm and the virtio drivers.
7. Changed to i440fx. Resulted in slightly less performance.

I am out of ideas. Does anyone know what else I could try, or have experience with this? Should this be the expected performance drop-off from bare metal to VM for a sandybridge era cpu?
  8. Those errors with the UD disk and the lsi sas2008 don't occur on spin up/down; I can't figure out a pattern to when they occur. I put that drive on the intel ich or the jmb585 and it doesn't show that error any more. The disk is a toshiba 2.5" laptop hd. It has a weird behavior where unraid shows it spun down (no green light) but it hasn't actually spun down, and the drive doesn't follow the spin down delay I set.

On another note, I am getting errors with my new hitachi hc320, with the jmb585 only: exception Emask / READ FPDMA QUEUED / hard resetting link. This was only while testing the jmb585 with diskspeed. The benchmark runs fine and there seems to be no problem other than the errors showing up in the log, but the CRC error count also went up from 0. I tried different ports/cables but that didn't resolve it, and I haven't used the jmb585 for further testing. No other drives show errors with the jmb. I'll just stick to the lsi card now that it's working with a pcie switch, and run the ssd and that UD disk off the intel ich.
  9. I've been trying to squeeze more performance out of my VM, and looked into cpu exact model instead of cpu passthrough. I had never done memory benchmarking on my VM before, but the difference in AIDA64 results was stunning. Here are the results with cpu exact model = sandybridge vs cpu passthrough:

(AIDA64 screenshots: cpu passthrough vs cpu exact model)

But these results don't make sense; if they were accurate, cpu passthrough should be slow as a snail for me. Tested with passmark v9 memory, cpu passthrough is faster than cpu exact model (2075 vs 2066 overall). Tested with the GTAV benchmark, cpu passthrough is noticeably faster, about 10% better frames. Has anyone noticed this as well? I found a thread on reddit with the same behavior. I've seen that ryzen/threadripper users get worse l3 cache performance according to AIDA when using cpu passthrough. Perhaps these wrong AIDA64 results only apply to intel core users? Does AIDA use some algorithm based off detected frequencies? If so, that would seem potentially very inaccurate in some circumstances, even on bare metal.
  10. It might be, but disk spin down/up is erratic due to the disk being used by a VM. These errors typically appear a few hrs apart, but at no consistent interval. I'll try to set it up so it spins up when I can monitor it.
  11. I got one of these pcie switch cards and can now run both the gpu and the lsi hba off the single pcie slot: Asmedia pcie switch card. I am using it with m.2 to pcie powered riser extensions. I had to use the acs override to break up just that one cpu pcie root port, but it works great; I can pass the gpu to a VM while the hba card stays on the host. For graphics benchmarks that aren't bandwidth intensive, I see roughly 1-2% less performance due to the added latency of going through the pcie switch. My gpu was already running at a x4 link only, as I have it externally mounted in a separate case with its own power supply, and the lower bandwidth doesn't affect the performance for what I use it for. Probably not worth the cost, but I plan to reuse this switch card down the road when I move my desktop parts over to upgrade the unraid server. That motherboard is also mini itx, but it supports 2-way x8/x8 bifurcation, so I could run 3 pcie devices off one pcie x16 slot! Might be handy in a couple of years when 5/10Gbps networking shows up in cheaper switches.

On another note, I started seeing these errors on one of my unassigned devices when connected to the lsi hba. I did not see this before when it was connected to the intel ich or the asm1062. This drive does have a huge and always increasing dma crc error count that I have never been able to resolve; I tried different cables and cleaning the connectors, but that didn't fix it. The drive has always passed extended smart tests, though. The lsi hba is flashed to IT mode with the latest P20 firmware. I will try switching controllers to see if the errors go away. Also bought the JMB585 card to test.

kernel: sd 7:0:0:0: [sdf] tag#1505 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
kernel: sd 7:0:0:0: [sdf] tag#1505 CDB: opcode=0x28 28 00 61 b2 a6 98 00 00 88 00
kernel: print_req_error: I/O error, dev sdf, sector 1639098008
  12. If you buy an 8tb elements/easystore/mybook now, you will probably get an air CMR EDAZ drive, not helium. It is a whitelabel hitachi hc320 datacenter drive. It appears to be slightly faster than the helium EMAZ/EZAZ drives, but it runs 10C hotter; it is probably running at 7200rpm rather than downspun to 5400rpm, and it uses 1-2W more than the helium drives. The 10TB WD externals are now also likely to contain an air drive; the 12TB are still all helium, I believe. When running preclear on mine while still inside the external case, I saw temps peak at 63C! Room temp was ~26C plus. After shucking it and putting it inside the NAS with a 140mm low rpm fan blowing on it, temps became more reasonable: about 43-47C under load and 40C or less at idle (not spun down), depending on room temp. But that is still 10C higher than my hitachi coolspin drives in the same backplane.
  13. I have a 4 drive array + 240GB ssd cache, where one of the data drives recently failed. The array consists of two 3TB drives (one of which is parity) and two 2TB drives (one of which has failed). I got an 8TB drive I plan to shuck; it is running through preclear now. I want to make the 8TB the new parity drive. Is there a process to rebuild the array and replace the parity drive that doesn't require backing up the data on the failed drive and doing a new config? Thanks.
  14. What are some typical problems that occur with port multipliers?