
Warrentheo

Members
  • Posts

    311
  • Joined

  • Last visited

Everything posted by Warrentheo

  1. Windows should automatically format the drive as GPT if you are doing a clean install. If instead you are passing through a physical drive that has been used before, you will need to Google how to use the "diskpart" command and run "clean" on the drive during the Windows install (this will wipe the drive completely)... Also, you usually want to use OVMF and Q35; SeaBIOS is for legacy OSes at this point... Edit: Just remembered, SeaBIOS can't boot GPT drives anyway (MBR only, which tops out at roughly 2TB)...
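For reference, the usual way to do this is to press Shift+F10 at the Windows installer to get a command prompt, then run diskpart. A rough sketch of the sequence (the disk number here is illustrative; double-check it with "list disk" first, because "clean" erases everything on the selected drive):

```
diskpart
list disk          (identify the target drive by its size)
select disk 0      (pick the drive to wipe -- verify the number first!)
clean              (remove all partition and boot data; destructive)
convert gpt        (optional: pre-convert to GPT for UEFI/OVMF installs)
exit
```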
  2. Need Diagnostics file... Also, where did you get the video BIOS file?
  3. Googling the error seems to be the direction needed for this one; there are quite a few answers for it... Using the form view of the Unraid VM editor is probably the best method of fixing this issue... If not, create a "New" VM but point it to the old image files it needs, then add the drive there... Some minor manual editing may be needed after that... Make sure to do backups...
  4. I had these kinds of issues when I accidentally filled up the cache drive... KVM/QEMU doesn't handle the cache drive filling up very well... I would try freeing up some space on it and see what happens... Make sure to change the libvirt settings back if that fixes it... A command that worked for my system when I was in this situation:

     fallocate -d <filenamegoeshere>

     This will "dig" all the zeros out of large image files that are taking up too much space (like the VM image files in the domains folder)... Just make backups and be careful; running the command with the wrong flags can damage files... There are other, more permanent solutions if that helps: look into switching the VM's image files to the SCSI driver and setting discard='unmap' for them in the VM configuration... But that can be a bit of a project on an already set-up machine...
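To illustrate what the -d ("dig holes") flag actually does, here is a small sketch you can run in a scratch directory. The filename demo.img is made up, and this assumes a filesystem that supports hole punching (ext4, XFS, btrfs):

```shell
# Create a 100 MiB file whose first 50 MiB are real zero-filled blocks on disk
truncate -s 100M demo.img
dd if=/dev/zero of=demo.img bs=1M count=50 conv=notrunc status=none

du -h demo.img         # shows roughly 50M actually allocated
fallocate -d demo.img  # "dig" out the all-zero blocks, turning them into holes
du -h demo.img         # allocation drops back toward 0
ls -lh demo.img        # apparent size is still 100M; the file contents are unchanged
rm demo.img
```

The key point is that only the on-disk allocation shrinks; any program reading the file still sees the same 100M of data, so VM images keep working.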
  5. Probably will need to post the Diagnostics file from the server to get details...
  6. Something else to consider: try changing the CPUs passed through to the VM, especially leaving CPU 0 and its hyper-thread out if it has one, just to test... If that changes anything, consider adding <emulatorpin cpuset='0,4'/> so it looks something like this:

     <cputune>
       <vcpupin vcpu='0' cpuset='4'/>
       <vcpupin vcpu='1' cpuset='1'/>
       <vcpupin vcpu='2' cpuset='5'/>
       <vcpupin vcpu='3' cpuset='2'/>
       <vcpupin vcpu='4' cpuset='6'/>
       <vcpupin vcpu='5' cpuset='3'/>
       <vcpupin vcpu='6' cpuset='7'/>
       <emulatorpin cpuset='0,4'/>
     </cputune>

     Make sure to edit the emulatorpin line so it matches the two CPUs for CPU 0 and its hyper-thread, if it has one...
  7. Something else to consider: when a system has few PCIe slots, they are often attached directly to the CPU instead of the PCH... This sometimes causes issues with the IOMMU groups... And motherboard manufacturers rarely publish the IOMMU group data for their boards... Edit: Also, that board doesn't have two x16 slots... it has one; the other is an x16-length slot running at x4...
  8. Sounds like you need to watch some videos from Space Invader; they helped me a lot... He has some other, updated videos after this one if it doesn't fully solve your issue... Some key points though: change your VM machine type from the default that Unraid gives (i440fx) to Q35... If you can reinstall the VM with that, that is best, but if you have to keep your current VM, then make sure to do some backups first, create a new VM, and point it at the old image file (don't skip the backups)... If you have an nVidia card that you are trying to pass through, you will also need a video BIOS file for/from your card... The videos will help with that...
  9. This sounds like Message Signaled Interrupts (MSI) issues to me; there is a tool that fixes the Windows registry for these issues, but I don't know if OS X VMs have an equivalent or if it is even necessary... Edit: Here is the Windows version of that tool if anyone needs it... MSI_util.exe
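For anyone curious what that utility actually changes: as far as I know it just toggles the MSISupported value under the device's interrupt-management key in the guest's registry. Roughly like this (the PCI instance path varies per device; the placeholder below is illustrative):

```
HKLM\SYSTEM\CurrentControlSet\Enum\PCI\<your-device-instance>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties
    "MSISupported" = dword:00000001
```

Setting MSISupported to 1 switches the device from line-based interrupts to MSI; the setting has to be re-applied if the device's hardware IDs change (e.g. after moving the card to a different slot).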
  10. Helped me quite a bit, and as I said, it fixed several compatibility issues for me... Specifically, PUBG really didn't like the i440fx machine type, and Windows UWP games (Sea of Thieves) didn't like it either... Related to that, I was literally writing this post when you posted your original...
  11. On my system, I fixed several performance issues, but more importantly several game compatibility issues, by switching the machine type from the Unraid default of i440fx to Q35... Make a backup of the VM, and just try creating a new VM template that points to the old image file... Windows will usually take a very long time on the first boot, but usually makes it through... I did hit Windows license issues once doing something like that, so don't skip the backups... Also, for ucliker: for most systems, the x16 vs x8 issue is really a non-issue... See this link... https://www.gamersnexus.net/guides/2488-pci-e-3-x8-vs-x16-performance-impact-on-gpus Edit: The x16 vs x8 issue boils down to affecting loading speed, and sometimes some minor micro-stuttering when the GPU has to talk to the rest of the system... Frame rate and most of the user experience come from the speed of the card crunching on what it already has loaded into VRAM, so bus speed is mostly irrelevant for that... You can mostly game on an x4 or even an x1 slot just fine with some games; it just takes longer to load... The new PCIe 4.0 slots coming soon are mostly needed for ultra-fast network cards and NVMe RAID...
  12. This is just an opinion question that may eventually be moved to a feature request... I personally have an Intel/nVidia GPU passthrough gaming VM... When I purchased Unraid originally, on my first VM attempt I used the i440fx machine type, since Unraid recommended it for all Windows machines... A few months ago, while having other issues with my system, I decided to reinstall the VM and run some performance experiments... The bottom line of that testing was that the Q35 machine type was noticeably faster, and more importantly it fixed several compatibility issues I was having with some games on the first VM attempt... I believe this is because i440fx doesn't expose a PCIe bus, and even though that doesn't affect bus speed, I think several games' anti-cheat or initialization code was looking for a PCIe bus and not finding it... I have flipped all my VMs to Q35 ever since and have not had issues with that decision... Bottom line: other than legacy OSes, is there still any reason to use the i440fx machine type? And if not, is there any reason not to flip the default Windows VM templates, or at least the Windows 10 template, to Q35 by default?
  13. On a related note, if you have a large file that takes up lots of space, but you know most of it is zeros (most hard drive image file backups, for instance), you can use the fallocate -d command on it, and Linux will go through and un-map all the blocks that currently store nothing but zeros; it freed up a bunch of hard drive space on some of my VM backups... Just be careful with the command; with the wrong flags it can damage files...
  14. Unlike Windows, Linux lets a file have different numbers for "space used" and "size on disk"... This is because Linux supports sparse files: a file full of zeros can report a size of 40GB, for instance, while only actually allocating the blocks that hold real data... It causes some confusion for the humans who need to read this stuff sometimes... The files under cache/system especially will have lots of empty space allocated to them; don't worry too much about it... Here is a listing of my libvirt folder, for instance; note that it shows the file as both 205M and 2.0G at the same time:

      /mnt/cache/system/libvirt# ls -lhsa
      total 205M
         0 drwxrwxrwx 1 root   root    22 Mar  8  2018 ./
         0 drwxrwxrwx 1 nobody users  322 Nov  9 11:45 ../
      205M -rw-rw-rw- 1 nobody users 2.0G Jan 11 16:10 libvirt.img

      The little meter bar on the "Main" page of Unraid tends to show disk free space just fine...
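The "two sizes" behaviour is easy to reproduce with a sparse file (the scratch filename below is made up):

```shell
# Apparent size vs. blocks actually allocated on disk
truncate -s 2G sparse.img         # claims 2G but allocates no data blocks
ls -lhs sparse.img                # first column: space used (~0); size column: 2.0G
du -h --apparent-size sparse.img  # reports the full 2.0G
du -h sparse.img                  # reports the tiny real allocation
rm sparse.img
```

Reading the file back gives 2G of zeros; the filesystem synthesizes them on the fly, which is exactly what libvirt.img above is doing for its unwritten regions.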
  15. The keywords you are looking for are "IOMMU" and "Intel VT-d"; a quick Google search for "ASUS P8B iommu" should tell you the answer...
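If the board does support VT-d, a quick way to check whether (and how) it is exposed is to list the IOMMU groups from sysfs on any Linux boot. This is a common snippet, lightly adapted; it prints nothing if the IOMMU is disabled or unsupported:

```shell
#!/bin/bash
# List every IOMMU group and the PCI devices inside it.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        lspci -nns "${dev##*/}"   # show vendor:device IDs for each member
    done
done
```

Devices that share a group generally have to be passed through to a VM together, which is why the grouping matters as much as raw VT-d support.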
  16. Currently I manually edit all my VMs to include an <emulatorpin> section and pin it to the core that I have isolated for Unraid... Would it be possible to include another section on the Settings → CPU Pinning page for emulatorpin?

      <cputune>
        <vcpupin vcpu='0' cpuset='1'/>
        <vcpupin vcpu='1' cpuset='5'/>
        <vcpupin vcpu='2' cpuset='2'/>
        <vcpupin vcpu='3' cpuset='6'/>
        <vcpupin vcpu='4' cpuset='3'/>
        <vcpupin vcpu='5' cpuset='7'/>
        <emulatorpin cpuset='0,4'/>
      </cputune>
  17. So just to confirm: Docker runs on top of the Unraid kernel, and so the only way to fix this is to turn off CPU isolation? With Unraid 6.6.3, CPU pinning is much easier than it used to be, so this is less of an issue; but with no isolation enabled, does the Unraid kernel trying to process something on a pinned CPU cause minor performance issues or stuttering in the VMs? For instance, I currently have Unraid isolated down to just core 0 and its hyper-thread mirror, then use emulatorpin to pin the VMs' emulation to that core and pin the remaining cores as needed to VMs/Dockers... But now that I am getting more into Docker, I am running into this issue because of it... Am I just stuck with turning off CPU isolation and dealing with random performance bumps in VMs and Dockers?
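For context, the isolation being discussed ultimately comes down to an isolcpus= kernel parameter in /boot/syslinux/syslinux.cfg (in recent 6.6.x builds it can also be set from the Settings → CPU Pinning page). A hedged sketch; the core numbers here are illustrative for a 4-core/8-thread CPU where cores 0 and 4 are left to Unraid, not a recommendation:

```
label Unraid OS
  kernel /bzimage
  append initrd=/bzroot isolcpus=1-3,5-7
```

The isolated cores are the ones the Unraid scheduler will avoid, which is what makes them safe to dedicate to VMs but also what keeps Docker containers off them.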
  18. Howdy, I am just getting started with Kodi/LibreELEC and testing out the various methods of getting a media server running with Unraid... So far the config I think I would prefer is the one from the default VM template for LibreELEC. I like how it acts like a Docker container even though it is a VM, and I like how it maps its /storage directory to an Unraid share instead of keeping it inside the VM... I have also managed a clean install of a LibreELEC VM with GPU passthrough, but would prefer to have the /storage folder outside the VM... The main issue with the current template is that it is a fairly old version of LibreELEC/Kodi, and it appears to be modified specifically for Unraid... The support thread for this VM template also doesn't appear to have had any posts for about a year at this point... Has support for this VM template died? Also, how modified from the original is the current "LibreELEC-unRAID.x86_64-7.0.1_1.img" image that is automatically downloaded? What would be involved in setting up a conversion script to take a file like http://releases.libreelec.tv/LibreELEC-Generic.x86_64-8.2.5.img.gz (current stable as of this post) or http://releases.libreelec.tv/LibreELEC-Generic.x86_64-8.90.006.img.gz (current LibreELEC beta as of this post) and convert it to behave like the current "LibreELEC-unRAID.x86_64-7.0.1_1.img" modified version of the image? I am offering my support to help set up such a script if it is not too difficult, though I admit I am not yet the greatest coder of all time...
  19. My system, after updating to 6.6.2, now shows a new error message during the long pause between the start of winbindd and the display of the network info/login prompt. Not sure what this error message affects, but it survived the rollback to 6.6.1 as well... This is the current bottom of my main terminal after bootup:

      Starting Samba:  /usr/sbin/nmbd -D
                       /usr/sbin/smbd -D
                       /usr/sbin/winbindd -D
      cat: write error: Broken pipe

      unRAID Server OS version: 6.6.1
      IPv4 address: <*>
      IPv6 address: <*>

      server login:

      Still investigating the issue on my end... qw-diagnostics-20181018-2058.zip
  20. This might be partly a Linux Mint (Ubuntu) question, but I am trying to mount the /home/username folder onto an SMB share on the Unraid host... I have added it to fstab, and it mounts... When the user logs in with the share completely empty, Linux Mint creates all the default folders like normal, so most of the permissions are correct, but there are other symptoms of it not working correctly... Firefox gives error messages about bookmark folders being invalid, and Google Chrome is unable to complete its first-time launch (it just shows the waiting indicator for about 2 minutes with no other sign that it is doing anything). The fstab entry:

      //ip_of_UnRaid_Host/usernameshare /home/username cifs guest,noperm,uid=username,gid=usernamegroup,file_mode=0777,dir_mode=0777,cache=none,hard 0 0

      The Unraid host has only the original "root" user, and I am attempting to avoid creating any new ones... I also don't have any particular interest in using the [homes] section of the SMB config, since this is just one share pointing to one user, and no others are foreseen... Do I have the mount options correct? What am I missing?
  21. My system currently seems to ignore the scheduler settings for parity check, and runs more frequently than set... It is configured with the following under Settings → Scheduler:

      PARITY CHECK
      Scheduled parity check: Weekly
      Day of the week: Sunday
      Day of the month: {------------}
      Time of the day: 23:00
      Month of the year: {------------}
      Write corrections to parity disk: Yes

      However, it has run every day for the last three days, and currently shows this on the main Dashboard page:

      PARITY STATUS
      Parity is valid
      Last check incomplete on Fri 28 Sep 2018 08:47:20 AM CDT (today), finding 0 errors.
      Error code: aborted
      Next check scheduled on Sun 30 Sep 2018 11:00:00 PM CDT
      Due in: 2 days, 13 hours, 13 minutes

      So it is scheduled with the correct settings and shows the correct time for the next check... I have been running with this issue for a while, since it is just a minor annoyance when discovered, but the latest versions of Unraid didn't fix it, and changing the settings repeatedly in the WebGUI doesn't affect the issue, though the dashboard always shows the correct due time... Just updated to 6.6.1 this morning; I was running 6.6.0 when the latest check tried to run last night...
  22. I have 2x 512GB 960 EVOs in RAID-0 as cache for Unraid, and I run the VMs with raw images, the SCSI driver, and discard set to unmap... This keeps the image files as small as possible, which in turn lets you keep quite a few images on the same drive; it only becomes an issue when multiple VMs are reading/writing their images at the same time, mostly during VM bootup... Windows also has to have the virtio SCSI driver installed during install...

      <disk type='file' device='disk'>
        <driver name='qemu' type='raw' cache='writethrough' discard='unmap'/>
        <source file='MainDrvWin10.SCSI.raw.img'/>
        <target dev='hdc' bus='scsi'/>
      </disk>
  23. 3.0 doesn't seem to add much that I would find useful; the part that seems to affect the most users is the Block Devices section... https://wiki.qemu.org/ChangeLog/3.0#Block_devices_and_tools I don't know enough about this stuff to have an informed opinion, but 3.0 looks to me to change quite a bit of the background workings without a whole lot of change to how it actually behaves... There appear to be some minor changes to the QEMU drivers for Windows machines, but the currently installed 0.1.141-1 is still the listed stable channel release... Mostly, 0.1.160-1 appears to change the way the drivers get built, plus some reporting changes for Windows... https://fedorapeople.org/groups/virt/virtio-win/CHANGELOG