bungee91

Posts posted by bungee91

  1. On 3/4/2023 at 1:32 PM, SimonF said:

    MultiGPU will require 6.12.

    Has this been incorporated? I am not seeing this behavior with two GPUs selected on 6.12.6.

    I can select one or the other and I will see the one selected. However, when I select both, I only see one on the dashboard, with no way to toggle between them (that I can see, anyhow).

  2. Admit to being out of the loop here, but I did search this topic without finding a clear answer.

    Does this release resolve the need for "nomodeset" in syslinux.cfg for the ASPEED VGA becoming a funny color?

     

    I've removed it to test, but haven't booted into 6.6 as of yet.

    Currently on an X11SSM-F MB. 

  3. That's awesome that you found a Docker that seems to be updated, and with a little ingenuity got it to work with UnRAID.

    I may have to check it out.

     

    As for the guide, yeah, I know it's all about the Slice™; no worries there.

    If you use Kodi you can apparently use the unofficial HDHR DVR PVR add-on here that will present the guide in a grid.

    I like the Emby guide, and it works pretty well at this point (recent redesign for Android TV, and numerous fixes for Theater to make it as good as it is now).

  4. I too would second this request (I actually started one over a year ago here, and found this thread while looking for any updates on this topic).

    This is unfortunate, as basic no-frills NAS boxes have an easy install path where UnRAID at this time does not (UnRAID has things basic NASes don't as well, just saying).

     

    Remember, it is about choices: while it's great to know alternative options exist, most people know they do and for some reason have chosen against them.

    Emby (which I have tried to use for the last year, and bought Premiere for) continues to have issues with TV playback, recordings, guide lineups, etc. They keep fixing these issues, but it is also mind-numbingly obnoxious to expect a recording to, you know, just work.

     

    To the OP: I plan to try this out now that they offer the yearly plan at $35 annually. At this point I think the best way on UnRAID is unfortunately a VM with it installed.

    I plan to attempt this in Win10 with a hard drive passed through to the VM for recording storage (apparently it doesn't record to network storage, which is unfortunate).

    I hope someone with ambitions to use this software on UnRAID comes along and makes a well-supported Docker or plugin (yes, not preferred) to easily install/manage it on UnRAID.

     

     

  5. I'm just wondering what the best way would be to utilize my 2nd on-board NIC with UnRAID/Docker/VMs.

    Why? I have a 2nd on-board Intel NIC currently doing nothing: a cable routed to it, and no traffic.

     

    I ask because I push a decent amount of traffic using Emby and networked HDHomeRun tuners (5 total), and I think at times my single default NIC is becoming saturated or causing small glitches in the stream.

     

    I currently have 3 NICs; one is a dedicated IPMI port, and the other two are Intel I210s.

    Motherboard is a SuperMicro X11SSM-F.

     

    I'm guessing bridging eth0 and eth1 is the most straightforward way to utilize both of them, correct?

    In this way both can be used for various traffic for Docker and VMs, right?

     

    Sorry, I've read a bit, but it gets confusing, and networking is not my favorite topic to geek out about. I have an unmanaged switch and am just looking for a bit more throughput for my uses. Currently on UnRAID 6.2.
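    For a rough sense of whether the tuner traffic alone could saturate the link, here is a back-of-envelope sketch. The per-stream rate and the no-transcode worst case are assumptions, not figures from the post:

```python
# Rough saturation check for the tuner traffic described above.
# Assumptions (not from the post): each HDHomeRun ATSC stream peaks
# around 19.4 Mbps, the NIC is gigabit, and in the worst case Emby
# relays every stream back out to a client without transcoding.
TUNERS = 5
MBPS_PER_STREAM = 19.4      # approximate peak ATSC transport stream rate
LINK_MBPS = 1000            # single gigabit NIC

inbound = TUNERS * MBPS_PER_STREAM    # tuners -> server
outbound = TUNERS * MBPS_PER_STREAM   # server -> clients (worst case)
total = inbound + outbound

print(f"worst-case tuner traffic: {total:.0f} Mbps "
      f"({100 * total / LINK_MBPS:.0f}% of a gigabit link)")
```

    Even the worst case sits well under a gigabit, so glitches may be bursts or driver issues rather than raw saturation; moving the tuner traffic to the second NIC is still a clean way to rule it out.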

  6. After upgrading from 6.2.4 to 6.3.0, I get the following in my VM/qemu log.

     

    2017-02-05T20:29:39.741418Z qemu-system-x86_64: -device vfio-pci,host=08:00.0,id=hostdev3,bus=pci.0,addr=0x9: Failed to mmap 0000:08:00.0 BAR 2. Performance may be slow
    2017-02-05T20:29:39.746332Z qemu-system-x86_64: warning: Unknown firmware file in legacy mode: etc/msr_feature_control

     

    I looked back at some old diagnostics from a couple of weeks ago and did not see this message. This is for an add-on USB 3.0 controller that I have passed through to the VM. I have not fully tested the USB, but I can tell you it is working on some level, as I am typing and using a mouse plugged into this device... not sure what "Performance may be slow" means.

     

    I am posting here vs. the KVM forum since I did not get this message previously.

     

    Any suggestions?

     

    Hi snoopin,

     

    Quick question: is there any felt impact from the upgrade in terms of your VM, performance, or anything with your USB devices attached to that controller? If not, the message is likely harmless, but I'm curious whether you can trace this back to any symptoms you notice when using the VM.

     

    @jonp I detailed this and requested the inclusion of a patch back in RC4 related to the USB card causing this issue; details are here http://lime-technology.com/forum/index.php?topic=53689.msg515705#msg515705 with the link to the fix. I have not noticed any degradation of performance in my VM with this issue.

     

    That's a patch to QEMU and we are not comfortable adding that, even if it is from Alex W. This is because once it's added, we have to maintain it. If this patch is needed, it will probably find its way upstream anyway, but it appears to be just cosmetic:

     

        "NB, this doesn't actually change the behavior of the device, it only

        removes the scary "Failed to mmap ... Performance may be slow" error

        message.  We cannot currently create an mmap over the MSI-X table"

     

    Okie dokie, thanks for chiming in.

  8. True, I can certainly see the use case as recommended with more advanced manual XML edits performed for a VM.

    I guess we're both telling the OP: no, not really, or not easily anyway.  :P

     

    There was a thread on here about hot-swapping CPUs, as libvirt apparently allows for such actions.

    It seemed a bit complex, but a script was posted in that thread. It doesn't help much with memory allocation, however.

  9. It works with Linux and OSX VMs.

    However, Windows VMs don't like it, because of the UUID,

    e.g.

    <uuid>98e3dac0-cffc-cffe-807e-1d9b4ebd1cc1</uuid>

    Windows uses the UUID for activation, so a different XML template will have a different UUID, and templates with the same UUID are not possible.

     

    A way around it is to:

    1. Set up the first Windows VM, giving it a boot vdisk that is as small as possible for the OS to work. Then have a second, much larger vdisk and use it as the second hard drive (D: drive). Install all programmes onto this larger vdisk, and set the location for documents, desktop, etc. to the D: drive.

     

    2. Then create the second Windows VM in the same way, with a small vdisk for the OS, but point this one's second vdisk at the same vdisk the first VM uses. Choose a different CPU count, memory, etc. for this template if you want. Install the same programmes as the first VM, to the same location (the second vdisk), and set the location of the docs/desktop etc. the same as in the first Windows VM.

     

    This way both VMs will be the same, with the same documents between them, but they can have different CPUs etc.

     

    Wouldn't the biggest downfall of this solution be that you'd have to have two Windows licenses?

    At that point it'd just be easier to edit the VM each time, as opposed to keeping two VMs patched and updated.
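    Editing the one VM's definition each time can even be scripted. A minimal sketch, assuming libvirt's domain XML format (the `<vcpu>`/`<memory>` element names are libvirt's; the helper itself is hypothetical):

```python
import xml.etree.ElementTree as ET

def set_vm_resources(domain_xml: str, vcpus: int, memory_mib: int) -> str:
    """Rewrite <vcpu> and <memory> in a libvirt domain XML, so one
    Windows VM (one license, one UUID) can be re-shaped per use."""
    root = ET.fromstring(domain_xml)
    root.find("vcpu").text = str(vcpus)
    mem = root.find("memory")
    mem.set("unit", "KiB")              # libvirt stores memory in KiB
    mem.text = str(memory_mib * 1024)
    return ET.tostring(root, encoding="unicode")

# Hypothetical, stripped-down domain definition for illustration:
example = (
    "<domain type='kvm'>"
    "<name>win10</name>"
    "<uuid>98e3dac0-cffc-cffe-807e-1d9b4ebd1cc1</uuid>"
    "<memory unit='KiB'>4194304</memory>"
    "<vcpu>2</vcpu>"
    "</domain>"
)
print(set_vm_resources(example, vcpus=4, memory_mib=8192))
```

    The UUID is left untouched, so Windows activation is unaffected; you would feed the result back to libvirt (e.g. via `virsh define`).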

  10. Bumping this request due to an update to the Emby beta requiring IPv6 to function.

     

    Emby is currently planning to fix this issue by (likely) adjusting their Docker release for UnRAID back to assigning a socket using IPv4.

    This is unfortunate considering they just made changes to their packages for 10 or so Linux distros, and the only one that had this issue reported was me trying to use this on UnRAID (I also now realize this request/thread was started by an Emby dev 1.5 years ago).

    In my (admittedly) limited research, IPv6 was formalized in 1998, included in the Linux kernel as of 2.2 (experimental), and removed from experimental status as of 2.6; it has been included ever since.

     

    So, what needs to be done to support this?

    It's not exactly going away.  :P
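    For reference, the IPv4-vs-IPv6 binding question usually comes down to one socket option. A generic sketch (not Emby's actual code), assuming the host kernel has IPv6 support at all:

```python
import socket

def make_listener(v6only: bool, port: int = 0) -> socket.socket:
    """Bind an IPv6 listener; with IPV6_V6ONLY off it is dual-stack,
    i.e. IPv4 clients still connect and show up as ::ffff:a.b.c.d."""
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, int(v6only))
    s.bind(("::", port))
    s.listen(1)
    return s

if socket.has_ipv6:
    dual = make_listener(v6only=False)   # serves IPv4 and IPv6 clients
    print("dual-stack listener on port", dual.getsockname()[1])
    dual.close()
```

    On a host with IPv6 disabled entirely, the `socket()` call itself fails, which is why an IPv6-capable stack on UnRAID's side, rather than Emby reverting to IPv4, is what this request is about.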

  11. In the VM tab, the log is opened via the far-right icon on the line for each VM (you may have to be in advanced view, I can't recall).

    It will open in a new tab and display the stats.

    I believe this is also included in the full diagnostics output, if you collect one of those.

  12. Is it off, or is it paused?

     

    Does the VM log show anything useful?

    If paused, have you run out of disk space where the VM image is stored?

    Have you followed the wiki to enable the "High performance" power profile and disable all sleep/hibernate within the VM?

  13. I wanted to give an update to this, and thank everyone for the help and diagnosing of this issue!  :-*

     

    This was entirely related to a software issue (yes, crazy, huh!? I seriously wouldn't have thought so either).

    Somehow the VM was causing this issue; I'm not certain how exactly (corrupt subsystem files within the VM?), I just know the issue is completely gone after a fresh Windows 10 VM install.

    If I turn that specific VM back on, they come back. I have tested changing Windows drivers, virtio drivers, etc., and it always would come back on that specific VM (odd).

     

    I even decided to push it and install both my R260X in the one VM and the GTX 950 in my primary, and all has been well for some time now.

    Had I known it was a software/VM install causing all of this trouble (it doesn't seem too plausible), I would have scorched the earth and everything it touched long ago.  >:(

     

    All of my hardware passed extensive testing: 48+ hours of PassMark MemTest86, HCI MemTest within Windows heavily stressing the IMC and CPU, and most every other test/task I could throw at it.

     

    Anyhow, thanks again, this is marked as solved!

  14. Wondering if you could add a patch to the next release, or the one following?

     

    Issue: receiving "Failed to mmap 0000:0e:00.0 BAR 2. Performance may be slow" in my VM log with the use of a Fresco Logic FL1100 USB 3.0 host controller.

    The problem is reported here, using the same card chipset: http://lists.nongnu.org/archive/html/qemu-discuss/2016-10/msg00009.html

     

    Detailed response from Alex Williamson:

        As reported in the link below, user has a PCI device with a 4KB BAR

        which contains the MSI-X table.  This seems to hit a corner case in

        the kernel where the region reports being mmap capable, but the sparse

        mmap information reports a zero sized range.  It's not entirely clear

        that the kernel is incorrect in doing this, but regardless, we need

        to handle it.  To do this, fill our mmap array only with non-zero

        sized sparse mmap entries and add an error return from the function

        so we can tell the difference between nr_mmaps being zero based on

        sparse mmap info vs lack of sparse mmap info.

       

        NB, this doesn't actually change the behavior of the device, it only

        removes the scary "Failed to mmap ... Performance may be slow" error

        message.  We cannot currently create an mmap over the MSI-X table.

     

    Patch is detailed within a later post (scroll down) here https://lists.nongnu.org/archive/html/qemu-discuss/2016-10/msg00023.html

     

    If I should post this elsewhere, please let me know (it is related to RC4 in that I see this issue while running it, but not caused by it directly).

    Thanks.

     

  15. Questions, and one possible dirty solution.

     

    Where did the ROM for the 780 come from: did you "cat" it yourself, or grab it from TechPowerUp?

    Was there a condition other than this that prompted you to supply the ROM for the card?

    Do you have the iGPU set in the BIOS as the primary GPU to boot from, with the UnRAID console displayed on it?

     

    Also (yes, yes, it's an RC...), have you tried 6.3.0? New QEMU, kernel, etc.; it's worth trying, and it's extremely unlikely you're putting your data at risk (if you even care; some don't).

     

    Have you toggled advanced view in the VM editor and turned off Hyper-V? This issue was supposedly fixed for Nvidia cards in 6.2, but it could solve your issue for now and is worth trying.

    You may have to reboot a time or two for it to take effect.

     

  16. A lot of your questions are answered in this thread: http://lime-technology.com/forum/index.php?topic=49051.0 Please give it a read.

     

    To your specific questions: it is best to assign a core and its related thread pair to the same VM.

    This pairing should be shown in the System Devices screen; if not there, it's somewhere within UnRAID if you look around (I'm not at my machine currently).

     

    It is likely that 0-17 are cores and 18-35 are their thread pairs; repeat this for the 2nd CPU.

    So for your question, if you wanted 4 actual cores your example is correct; however, it is recommended not to use core 0, as UnRAID favors the use of that one.

    I would add the thread pairs along with that, which is likely (and you need to check) CPUs 0-3 and 18-21 for your 1st example above, VM1.

     

    As to reserving cores, you certainly have enough of them if you choose to go that route!  ;)

    In my personal observations (and opinion), I don't feel the need to isolate cores unless my VM shows signs of unacceptable latency, or just general lag.

    If you have these issues, or notice things as such, I'd then recommend isolating the cores and continuing to test to see whether it helps alleviate that condition.
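    The pairing advice above can be sketched as a small helper. The linear layout (core i pairing with thread i + 18) is an assumption that must be verified against the actual topology, e.g. /sys/devices/system/cpu/cpu*/topology/thread_siblings_list:

```python
# Hypothetical sketch of the pinning described above: 18-core CPUs with
# hyper-threading, enumerated so that core i's sibling thread is i + 18.
# Verify the real layout before pinning; enumeration varies by system.
CORES_PER_CPU = 18

def sibling(cpu_index: int) -> int:
    """Return the HT sibling under the assumed linear enumeration."""
    span = 2 * CORES_PER_CPU                   # logical CPUs per socket
    base = (cpu_index // span) * span          # start of this socket's range
    offset = cpu_index - base
    return base + (offset + CORES_PER_CPU) % span

def pin_set(cores):
    """Expand a list of physical cores into cores plus their siblings."""
    return sorted(set(cores) | {sibling(c) for c in cores})

# 4 physical cores for VM1, skipping core 0 since UnRAID favors it:
print(pin_set([1, 2, 3, 4]))   # -> [1, 2, 3, 4, 19, 20, 21, 22]
```

    The same set would then go into the VM's CPU pinning in the UnRAID editor, one checkbox per logical CPU.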