
Warrentheo

Members

  • Content Count: 109
  • Joined
  • Last visited

Community Reputation: 7 (Neutral)

About Warrentheo

  • Rank
    Advanced Member
  • Birthday October 5

Converted

  • Gender
    Male
  • Location
    Earth
  • Personal Text
    Currently running:

    unRaid 6.4.1 Pro License Since 02/02/2018

    Asus IX Hero with i7-7700K and 64GB RAM

Windows 10 Gaming VM with GTX 1070 passthrough

5 HDs with m.2 RAID-0 Cache

  1. Currently I manually edit all my VMs to include an emulatorpin section and pin the emulator threads to the core that I have Unraid isolated to... Would it be possible to include another section on the Settings CPU Pinning page for emulatorpin?

     <cputune>
       <vcpupin vcpu='0' cpuset='1'/>
       <vcpupin vcpu='1' cpuset='5'/>
       <vcpupin vcpu='2' cpuset='2'/>
       <vcpupin vcpu='3' cpuset='6'/>
       <vcpupin vcpu='4' cpuset='3'/>
       <vcpupin vcpu='5' cpuset='7'/>
       <emulatorpin cpuset='0,4'/>
     </cputune>
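For anyone doing this by hand in the meantime, emulator pinning can also be inspected and set at runtime with virsh; a minimal sketch (the domain name "Windows10" is a placeholder, and the commands are echoed rather than executed, so running this changes nothing):

```shell
#!/bin/sh
# Sketch: inspecting/setting emulator pinning with virsh instead of
# editing the XML by hand. "Windows10" is a placeholder domain name.
# Commands are echoed, not executed, so this is safe to run anywhere.
DOMAIN="Windows10"

# Query the current emulator-thread pinning:
echo "virsh emulatorpin $DOMAIN"

# Pin emulator threads to host CPUs 0 and 4, matching the
# <emulatorpin cpuset='0,4'/> element above; --config persists it:
echo "virsh emulatorpin $DOMAIN 0,4 --config"
```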
  2. So just to confirm: Docker runs on top of the Unraid kernel, and so the only way to fix this is to turn off CPU isolation? With Unraid 6.6.3, CPU pinning is much easier than it used to be, so this is less of an issue, but with no isolation enabled, does this cause minor performance issues or stuttering in the VMs when the Unraid kernel tries to process something on a pinned CPU? For instance, I currently have Unraid isolated down to just core 0 and its hyperthread mirror, then use emulatorpin to pin the VMs' emulator threads to that core and pin the remaining cores as needed to VMs/Dockers... But now that I am getting more into Docker, I am running into this issue because of it... Am I just stuck with turning off CPU isolation and dealing with random performance bumps in VMs and Dockers?
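For what it's worth, containers can still be confined to chosen host CPUs at run time even without kernel-level isolation; a minimal sketch (the CPU numbers assume a 4-core/8-thread CPU like the 7700K, the image name is a placeholder, and the commands are echoed rather than executed):

```shell
#!/bin/sh
# Sketch: keeping a container off VM cores without isolcpus.
# CPU numbers assume cores 0-3 with HT siblings 4-7; the image name
# is a placeholder. Commands are echoed, not executed.

# Confine a container to core 0 and its hyperthread sibling, leaving
# the VM-pinned cores alone:
echo "docker run --cpuset-cpus=0,4 some/container"

# The isolation being discussed is the kernel boot parameter, e.g. in
# the syslinux append line:
echo "append isolcpus=1-3,5-7 initrd=/bzroot"
```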
  3. Howdy, I am just getting started with Kodi/LibreELEC and testing out the various methods of getting a media server running with Unraid... So far the config I think I would prefer is the one on the default VM template for LibreELEC. I like how it acts like a Docker container even though it is a VM, and I like how it maps its /storage directory to an Unraid share instead of keeping it inside the VM... I have also been able to do a clean install of a LibreELEC VM with GPU passthrough, but would prefer to have the /storage folder outside the VM... The main issue with the current template is that it uses a fairly old version of LibreELEC/Kodi and appears to be modified specifically for Unraid... The support thread for this VM template also doesn't appear to have had any posts for about a year at this point... Has support for this VM template died? Also, how modified from the original is the "LibreELEC-unRAID.x86_64-7.0.1_1.img" image that is automatically downloaded? What would be involved in setting up a conversion script to take a file like http://releases.libreelec.tv/LibreELEC-Generic.x86_64-8.2.5.img.gz (current stable as of this post) or http://releases.libreelec.tv/LibreELEC-Generic.x86_64-8.90.006.img.gz (current LibreELEC beta as of this post) and convert it to behave like the current modified "LibreELEC-unRAID.x86_64-7.0.1_1.img" image? I am offering to help set up such a script if it is not too difficult, though I admit I am not yet the greatest coder of all time...
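As a very rough starting point for such a script (purely a sketch: only the URL and filenames come from the post, the steps are assumptions about what the conversion would involve, and the commands are echoed rather than executed):

```shell
#!/bin/sh
# Hypothetical skeleton for the conversion script discussed above.
# Only the URL/filename are from the post; everything else is assumed.
set -e
URL="http://releases.libreelec.tv/LibreELEC-Generic.x86_64-8.2.5.img.gz"
IMG="LibreELEC-Generic.x86_64-8.2.5.img"

echo "wget -O ${IMG}.gz ${URL}"   # fetch the stock image
echo "gunzip -k ${IMG}.gz"        # unpack to a raw .img for the VM template
# The actual Unraid-specific changes (e.g. relocating /storage to an
# Unraid share) are exactly the unknown part this post is asking about.
```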
  4. My system, after updating to 6.6.2, now shows a new error message during the long pause between the start of winbindd and the display of the network info/login prompt. Not sure what this error message affects, but it survived the rollback to 6.6.1 as well... This is the current bottom of my main terminal after bootup:

     Starting Samba:  /usr/sbin/nmbd -D
                      /usr/sbin/smbd -D
                      /usr/sbin/winbindd -D
     cat: write error: Broken pipe

     unRAID Server OS version: 6.6.1
     IPv4 address: <*>
     IPv6 address: <*>

     server login:

     Still investigating the issue on my end... qw-diagnostics-20181018-2058.zip
  5. This might be partly a Linux Mint (Ubuntu) question, but I am trying to mount the /home/username folder onto an SMB share on the Unraid host... I have gotten it added to the fstab, and it mounts... When I log the user in with the share completely empty, Linux Mint creates all the default folders like normal, so it has most of the permissions correct, but there are other symptoms of it not working correctly... Firefox gives error messages about bookmark folders being invalid, and Google Chrome is unable to do its first-time launch (it just shows the waiting indicator for about 2 minutes with no other indication that it is doing anything)... My fstab entry:

     //ip_of_UnRaid_Host/usernameshare /home/username cifs guest,noperm,uid=username,gid=usernamegroup,file_mode=0777,dir_mode=0777,cache=none,hard 0 0

     The Unraid host has only the original "root" user, and I am attempting to avoid creating any new ones... I also don't have any particular interest in using the [homes] section of the SMB config, since this is just one share pointing to one user, and no others are foreseen... Do I have the mount options correct? What am I missing?
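One thing worth trying (a sketch, not a tested fix): Firefox and Chrome keep their profiles in SQLite databases, which need byte-range locking that CIFS does not provide by default; the mount.cifs options nobrl and mfsymlinks often help with exactly these symptoms. The host/share/user names below are the placeholders from the post, and the commands are echoed rather than executed:

```shell
#!/bin/sh
# Sketch: testing the CIFS mount by hand before trusting fstab.
# Host/share/user names are placeholders from the post; commands are
# echoed, not executed, so this is safe to run anywhere.
OPTS="guest,noperm,uid=username,gid=usernamegroup,file_mode=0777,dir_mode=0777,cache=none,hard"

# One-off manual mount to rule out fstab syntax problems:
echo "mount -t cifs //ip_of_UnRaid_Host/usernameshare /home/username -o $OPTS"

# Browser profile SQLite databases need byte-range locks ('nobrl'),
# and symlinks inside the profile need 'mfsymlinks':
echo "mount -t cifs //ip_of_UnRaid_Host/usernameshare /home/username -o $OPTS,nobrl,mfsymlinks"
```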
  6. My system currently seems to ignore the scheduler settings for parity check and runs more frequently than set... It is configured with the following under Settings → Scheduler:

     PARITY CHECK
     Scheduled parity check: Weekly
     Day of the week: Sunday
     Day of the month: {------------}
     Time of the day: 23:00
     Month of the year: {------------}
     Write corrections to parity disk: Yes

     However, it has run every day for the last three days, and currently shows this on the main Dashboard page:

     PARITY STATUS
     Parity is valid
     Last check incomplete on Fri 28 Sep 2018 08:47:20 AM CDT (today), finding 0 errors.
     Error code: aborted
     Next check scheduled on Sun 30 Sep 2018 11:00:00 PM CDT
     Due in: 2 days, 13 hours, 13 minutes

     This shows it is scheduled with the correct settings and gives the correct time for the next check... I have been running with this issue for a while, since it is just a minor annoyance when discovered; however, the latest versions of Unraid didn't seem to fix it, and changing the settings repeatedly in the WebGUI doesn't seem to affect the issue, though the dashboard always shows the correct due time... Just updated to 6.6.1 this morning; was running 6.6.0 when the latest check tried to run last night...
  7. Warrentheo

    Multiple VMs off one SSD

    I have 2x 512GB 960 EVOs in a RAID-0 cache for Unraid, then I run the VMs with a raw image, the SCSI driver, and have them set to unmap... This keeps the files as small as possible, which in turn allows you to have quite a few images on the same drive; it only becomes an issue when multiple VM drives are reading/writing to the images at the same time, mostly during VM bootup... Windows also has to have the SCSI drivers installed during install...

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writethrough' discard='unmap'/>
      <source file='MainDrvWin10.SCSI.raw.img'/>
      <target dev='hdc' bus='scsi'/>
    </disk>
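To see why raw plus unmap keeps images small: a sparse raw image reports its full apparent size but only consumes blocks that have actually been written. A quick demonstration (the filename is a throwaway placeholder):

```shell
#!/bin/sh
# Demonstration: a sparse raw image has a large apparent size but uses
# almost no disk blocks until data is written into it.
set -e
truncate -s 1G demo.raw.img   # 1 GiB apparent size, nothing allocated

ls -l demo.raw.img            # reports the full 1 GiB size
du -k demo.raw.img            # reports near-zero actual usage
```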
  8. Warrentheo

    6.6 will include QEMU 2.12?

    3.0 doesn't seem to add much that I would find useful; the part that affects the most users is the Block Devices section... https://wiki.qemu.org/ChangeLog/3.0#Block_devices_and_tools I don't know enough about this stuff to have an informed opinion, but 3.0 looks to me to change quite a bit of the background workings without a whole lot of actual changes to how it works... There do appear to be some minor changes to the QEMU drivers for Windows machines... But the currently installed 0.1.141-1 is the listed stable channel... Mostly, 0.1.160-1 appears to change the way the drivers get built, plus some reporting changes for Windows... https://fedorapeople.org/groups/virt/virtio-win/CHANGELOG
  9. It just uses the same KVM/QEMU tools that a normal Linux system uses... I would not recommend trying to set up a new machine from scratch without a helpful tool of some sort; you could accidentally wipe the wrong drive or tell some piece of hardware to cook itself... Just Google "KVM command line" if you want to learn more, though...
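Purely for illustration, the kind of invocation those tools generate for you looks roughly like this (every flag value below is an assumption, and the command is echoed rather than executed, precisely because running a hand-typed one against the wrong disk is the risk being described):

```shell
#!/bin/sh
# Illustrative QEMU/KVM command line of the sort a VM manager generates.
# Echoed rather than executed; disk path, memory, and CPU count are
# placeholders.
echo "qemu-system-x86_64 -enable-kvm -m 4096 -smp 4 \
  -drive file=vmdisk.raw.img,format=raw,if=virtio \
  -nic user,model=virtio-net-pci"
```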
  10. Warrentheo

    Any news on 6.6?

    https://cdn.kernel.org/pub/linux/kernel/v4.x/ChangeLog-4.14.62 The current patch to 4.14 includes those as well... But you are correct, seems like a good idea to update to 4.14.62...
  11. Warrentheo

    Windows 10 / RX 560 GPU - .5 second pauses

    https://en.wikipedia.org/wiki/Message_Signaled_Interrupts Updating the driver tends to reset the MSI setting for that driver for me, I use this tool to re-enable it afterwards...
  12. Warrentheo

    Windows 10 / RX 560 GPU - .5 second pauses

    Try this program, and make sure MSI interrupts are turned on for the card... Reboot the VM afterward... MSI_util.exe
  13. Warrentheo

    Out of memory?

    Mine shows that also, but it is a plugin that just forces Windows to have the Unraid server be the LanMan Local Master, and so doesn't need updates... Unlikely to be the issue... Edit: This is not really a permanent solution, but you can try using this plugin to see if it helps you get up and running enough to trim things down:
  14. Warrentheo

    CPU Governor state

    Mine does as well, but the CPU temps tell me that the setting I applied is being enforced...
  15. Warrentheo

    CPU Governor state

    The best method I have seen is the "Tips and Tricks" plugin, seen here (it allows you to set the governor as you see fit):
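If you want to double-check outside the WebGUI, the active governor can be read straight from sysfs; a minimal sketch (read-only, and it falls back gracefully on systems that don't expose cpufreq):

```shell
#!/bin/sh
# Read the governor cpufreq is actually using for cpu0; read-only,
# so this cannot change any setting.
GOV_FILE=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [ -r "$GOV_FILE" ]; then
    echo "cpu0 governor: $(cat "$GOV_FILE")"
else
    echo "cpufreq not exposed on this system"
fi
```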