Leaderboard

Popular Content

Showing content with the highest reputation on 09/13/19 in all areas

  1. There have been several posts on the forum about improving VM performance by adjusting CPU pinning and assignments when VMs stutter during media playback and gaming. I've put together what I think is the best of those ideas. I don't claim this is the complete answer, but it has helped me with a particularly latency-sensitive VM.

Windows VM Configuration

You need a well-configured Windows VM in order to get any improvement from CPU pinning. Configure your VM as follows:
- Set the machine type to the latest i440fx.
- Boot with OVMF, not SeaBIOS, for Windows 8 and Windows 10. Your GPU must support UEFI boot if you are doing GPU passthrough.
- Set Hyper-V to 'Yes' unless you need it off for Nvidia GPUs.
- Don't initially assign more than 8 GB of memory, and set 'Initial' and 'Max' memory to the same value so memory ballooning is off.
- Don't assign more than 4 CPUs total. Assign CPUs to your VM in pairs if your processor supports Hyperthreading.
- Be sure you are using the latest GPU driver.
- I have had issues with virtio network drivers newer than 0.1.100 on Windows 7. Try that driver first and update once your VM is performing properly.

Get the best performance you can by adjusting the memory and CPU settings. Don't over-provision CPUs and memory; you may find that performance decreases. More is not always better.

If you have more than 8 GB of memory in your unRAID system, I also suggest installing the 'Tips and Tweaks' plugin and setting the 'Disk Cache' settings to the values suggested for VMs (click the 'Help' button for the suggestions). Also set 'Disable NIC flow control' and 'Disable NIC offload' to 'Yes'; these settings are known to cause VM performance issues in some cases. You can always go back and change them later.

Once you have your VM running correctly, you can then adjust CPU pinning to possibly improve performance. Unless your VM is configured as above, you will probably be wasting your time with CPU pinning.
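As a concrete illustration, the memory advice above corresponds to a libvirt XML fragment like this (values are examples only, not a recommendation for every system; unRAID's VM editor normally generates these lines for you):

```xml
<!-- 'Initial' and 'Max' memory set to the same value: ballooning is effectively off -->
<memory unit='KiB'>8388608</memory>
<currentMemory unit='KiB'>8388608</currentMemory>
<!-- No more than 4 vcpus, assigned in hyperthread pairs -->
<vcpu placement='static'>4</vcpu>
```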
What is Hyperthreading?

Hyperthreading is a means of sharing one CPU core between multiple processes. A hyperthreaded core is one core presenting two hyperthreads. It looks like this:

HT ---- core ---- HT

It is not a base core plus a hyperthread:

core ---- HT

When isolating CPUs, the best performance is gained by isolating and assigning both threads of a pair to a VM, not just what some think of as the "core".

Why Isolate and Assign CPUs?

Some VMs suffer from latency because they share hyperthreaded CPUs. The method described here helps with the latency caused by CPU sharing and context switching between hyperthreads. If you have a VM that suffers from stuttering or pauses in media playback or gaming, this procedure may help. Don't assign more CPUs to a VM that has latency issues; that is generally not the problem. I also don't recommend assigning more than 4 CPUs to a VM. I don't know why any VM needs that kind of horsepower.

In my case I have a 4-core Xeon processor with Hyperthreading. The CPU layout is:

0,4 1,5 2,6 3,7

The hyperthread pairs are (0,4), (1,5), (2,6), and (3,7), so each core runs two hyperthreads. When assigning CPUs to a high-performance VM, assign them in hyperthread pairs.

I isolated some CPUs for the VM by adding the following to the syslinux configuration on the flash drive:

append isolcpus=2,3,6,7 initrd=/bzroot

This tells Linux that CPUs 2, 3, 6, and 7 are not to be managed or used by Linux. There is an additional setting for vcpus called 'emulatorpin', which moves the emulator tasks onto other CPUs and off the VM's CPUs. I then assigned the isolated CPUs to my VM and added the 'emulatorpin':

<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <vcpupin vcpu='2' cpuset='6'/>
  <vcpupin vcpu='3' cpuset='7'/>
  <emulatorpin cpuset='0,4'/>
</cputune>

What ends up happening is that the 4 logical CPUs (2,3,6,7) are not used by Linux but are available to assign to VMs.
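To find the hyperthread pairs on your own system, you can read the topology Linux exposes under /sys. This is a generic Linux sketch, not something from the original post; output like "cpu2: 2,6" means CPUs 2 and 6 are two threads of the same core:

```shell
# List each logical CPU and the sibling thread(s) it shares a core with.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  printf '%s: ' "${cpu##*/}"
  cat "$cpu/topology/thread_siblings_list"
done
```

On the Xeon described above this would print pairs such as "cpu2: 2,6" and "cpu3: 3,7", which are exactly the sets to isolate and assign together.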
I then assigned them to the VM and pinned the emulator tasks to CPUs (0,4), the first CPU pair. Linux tends to favor the low-numbered CPUs. Make your CPU assignments in the VM editor, then edit the XML and add the emulatorpin assignment. Don't change any other CPU settings in the XML. I've seen recommendations to change the topology:

<cpu mode='host-passthrough'>
  <topology sockets='1' cores='2' threads='2'/>
</cpu>

Don't make any changes to this setting. The VM manager sets it appropriately; there is no advantage in changing it, and doing so can cause problems such as a crashing VM.

This has greatly improved the performance of my Windows 7 Media Center VM serving Media Center Extenders. I am not a KVM expert and this may not be the best way to do this, but from reading forum posts and searching the internet, it is the best I've found so far. I would like to see LT offer some performance-tuning settings in the VM manager that would help with these settings, so a VM could be tuned without all the gyrations I've gone through here. They could at least offer some 'emulatorpin' settings.

Note: I still see confusion about physical CPUs, vcpus, and hyperthreaded pairs. A CPU pair like 3,7 is two threads that share a core; it is not a core plus a hyperthread. When isolating and assigning CPUs to a VM, do it in pairs. Don't isolate and assign one (3) but not its pair (7) unless you don't assign 7 to any other VM; that is not going to give you what you want. vcpus are relative to the VM only. You don't isolate vcpus; you isolate physical CPUs that are then assigned to VM vcpus.
    1 point
  2. Simply put, and taken from the FAQ, it's a HUGE advantage IMHO. 😀 Q2. Why has this Docker image been created, when we already have an unRAID preclear plugin? A2. Because plugins rely on the underlying OS (Slackware) in order to run, any changes made by Limetech can potentially break the preclear plugin; this has historically happened a number of times and is unfortunately a fact of life. So how do we try to mitigate it? By using Docker, which gives us a known platform from which to run the preclear script and should reduce the chances of this happening. So if Limetech makes system changes, the Docker container stays intact and still performs as intended.
    1 point
  3. Unclean shutdowns Sent from my NSA monitored device
    1 point
  4. Wow. Just curious, what was your motivation for posting this question? 1. Trying to be funny? 2. Too lazy to read the FAQ? 3. Honestly didn't understand what you read in the FAQ? If the answer is 3, I am truly sorry for my attitude here, and apologize profusely. What part of the FAQ explanation needs more work?
    1 point
  5. That's a new one for me. Check the file /config/etc/letsencrypt/renewal/website.com.conf to make sure it contains all the parameters, including this line: dns_cloudflare_credentials = /config/dns-conf/cloudflare.ini
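A quick way to check is a one-liner like the following sketch (the path and parameter name come from the post above; website.com is a placeholder for your own domain):

```shell
# Check that the renewal config still references the Cloudflare credentials file.
CONF=/config/etc/letsencrypt/renewal/website.com.conf
if grep -q '^dns_cloudflare_credentials' "$CONF" 2>/dev/null; then
  echo "credentials line present"
else
  echo "credentials line missing"
fi
```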
    1 point
  6. Take a look at the Tips and Tweaks Plugin. You can play with settings there to influence how Unraid handles its RAM and when it writes to disk. Definitely read the thread for suggestions.
    1 point
  7. Alright- I'm going to experiment with taking away "Upvote", leaving "Like" and adding a "Haha" and "Thanks" to keep things positive around here. As always, if you have feedback or comments on this, feel free to respond here or DM me! Cheers
    1 point
  8. Sneak peek, Unraid 6.8. The image is a custom "case image" I uploaded.
    1 point
  9. 1 point
  10. One of my biggest fears is my unRAID USB drive failing and taking my entire system down. I'm wondering if there are any plans to allow a secondary USB drive to be left in my unRAID machine so that, if the boot drive fails, the secondary drive becomes the primary, with a notification in the GUI of what happened.
    1 point
  11. Do you see any advanced virtualization tools coming to unRAID, such as VM cloning, snapshots, IP management of VMs, and overseeing and managing VMs on other unRAID machines?
    1 point
  12. What is something you would like unRAID to have that it does not currently have, if development time, money, or other constraints were not an issue?
    1 point
  13. Can you promote SpaceInvaderOne? He's the only reason I use Unraid.
    1 point
  14. Thanks all, I am up and running now with Plex and VMs. Getting there this fast was very much due to SpaceinvaderOne's very useful instruction videos. Running Plex with an Nvidia RTX4000. As soon as the data is copied I will do some testing.
    1 point
  15. Here's how I have it set up. In my /boot/config/go file, I've added this to forward syslog traffic to a Docker container listening on port 1514:

/usr/bin/sed --in-place "s/^#\*\.\* \@\@/\*\.\* @localhost:514/" /etc/rsyslog.conf
# Reload the rsyslog daemon
/etc/rc.d/rc.rsyslogd reload

You can run it by hand if you don't want to reboot your server (/boot/config/go executes at boot). That line forwards data to my local Splunk docker on UDP 1514 (host port 514 is mapped to container port 1514 below). As a bonus, here's my docker-compose file for Splunk:

version: '2'
services:
  splunk:
    image: splunk/splunk:latest
    hostname: splunk
    environment:
      SPLUNK_START_ARGS: --accept-license --answer-yes
      SPLUNK_ENABLE_LISTEN: 9997
      SPLUNK_ADD: tcp 1514
    volumes:
      - /mnt/cache/appdata/splunk/etc:/opt/splunk/etc
      - /mnt/cache/appdata/splunk/var:/opt/splunk/var
    ports:
      - "8000:8000"
      # - "9997:9997"
      # - "8088:8088"
      # - "1514:1514"
      - "514:1514/udp"
    restart: always

If you have Nerd Tools installed, make sure docker-compose is there. Then bring it up by running the following in the directory where you placed the docker-compose.yml file:

docker-compose up -d

You should still log in to Splunk on port 8000 and make sure you can see your data. If not, let us know. You can test whether data is getting to Splunk by running this from any Linux/Mac/unRAID host and then typing a line like "Testing!":

nc -u localhost 514

Hope that helps!
    1 point
  16. Been a while since anyone asked about TLER on this forum. I believe the consensus is that Unraid is better used without TLER enabled. Unraid is not really RAID.
https://forums.unraid.net/topic/1233-tler-time-limited-error-recovery/
https://forums.unraid.net/topic/32964-choosing-which-type-of-drives-to-use-for-what/?do=findComment&comment=317766
https://forums.unraid.net/topic/69564-most-basic-barebones-dirt-cheap-components-that-work-without-severe-limitations/?do=findComment&comment=635836
and others if you search
    1 point
  17. I solved this on my system: Asus Rampage IV Formula / Intel Core i7-4930k / 4 x NVIDIA Gigabyte GTX 950 Windforce, with all graphics cards passed through to Windows 10 VMs. The problem I was having was that the 3 cards in slots 2, 3 and 4 pass through fine, but passing through the card in slot 1, which is used to boot unRAID, freezes the connected display.

I explored the option of adding another graphics card. A USB card won't be recognized by the system BIOS for POST. The only other card I could add would be connected by a PCIe 1x to PCIe 16x riser card (which did work for passthrough, by the way, but I need to pass through a x16 slot), and it would require modding the mainboard BIOS to use it as the primary card. So I looked for another solution.

The problem was caused by the VBIOS on the video card, as described at http://www.linux-kvm.org/page/VGA_device_assignment:

To re-run the POST procedures of the assigned adapter inside the guest, the proper VBIOS ROM image has to be used. However, when passing through the primary adapter of the host, Linux provides only access to the shadowed version of the VBIOS which may differ from the pre-POST version (due to modification applied during POST). This has been observed with NVIDIA Quadro adapters. A workaround is to retrieve the VBIOS from the adapter while it is in secondary mode and use this saved image (-device pci-assign,...,romfile=...). But even that may fail, either due to problems of the host chipset or BIOS (host kernel complains about unmappable ROM BAR).

In my case I could not use the VBIOS from http://www.techpowerup.com/vgabios/. The file I got from there, and also the ones read using GPU-Z, is probably a hybrid BIOS that includes both the legacy and the UEFI version.
It's probably possible to extract the required part from that file, but it's pretty simple to read it from the card directly using the following steps:

1) Place the NVIDIA card in the second PCIe slot, using another card as the primary graphics card to boot the system.
2) Stop any running VMs and open an SSH connection.
3) Type "lspci -v" to get the PCI id of the NVIDIA card. It is assumed to be 02:00.0 here; otherwise change the numbers below accordingly.
4) If the card is configured for passthrough, the above command will show "Kernel driver in use: vfio-pci". To retrieve the VBIOS, in my case I had to unbind it from vfio-pci:

echo "0000:02:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind

5) Read out the VBIOS:

cd /sys/bus/pci/devices/0000:02:00.0/
echo 1 > rom
cat rom > /boot/vbios.rom
echo 0 > rom

6) Bind it back to vfio-pci if required:

echo "0000:02:00.0" > /sys/bus/pci/drivers/vfio-pci/bind

The card can now be placed back as primary, and a small modification must be made to the VM that will use it, so it uses the VBIOS file read in the steps above. In the XML for the VM, change the following line:

<qemu:arg value='vfio-pci,host=01:00.0,bus=pcie.0,multifunction=on,x-vga=on'/>

to:

<qemu:arg value='vfio-pci,host=01:00.0,bus=pcie.0,multifunction=on,x-vga=on,romfile=/boot/vbios.rom'/>

After this modification, the card is passed through without any problems on my system. This may be the case for more NVIDIA cards used as primary adapters!
    1 point
  18. That seems to work. All my shares are there and I seem to be able to access and transfer data. So that brings me to the question of why this didn't just show up like my other computers. Is there any disadvantage to having it set up this way? I apologize for the stupid questions, I'm not that keen on networking..... I do very much appreciate everyone's help. Was worried I was going to have to format and go back to 8.1.......

Over the course of the last 15 years, I have discovered one immutable truth: Windows sucks at network discovery. Always has and always will. I have never had consistent results with even pure Windows networks and all of the computers showing up. I gave up trying to fix it and worrying about it a long, long time ago. And no, there is no disadvantage.
    1 point