bungee91

Everything posted by bungee91

  1. If you do use a riser/adapter, you may have to do the following trick to have the card initialized by your motherboard. https://lime-technology.com/forum/index.php?topic=43948.msg419720#msg419720
  2. This is more motherboard dependent, as the split between the XHCI and EHCI controllers can differ (they're integrated into the chipset). Some boards also provide additional USB ports from a separate IC (check your manual for this; four of my rear ports come from an add-on IC, ASMedia I believe). It also depends on BIOS settings, as some boards can coax the controller into showing up as separate devices. For the most part, though, yes you can split it, and yes the controllers will be in their own IOMMU groups as listed, but you'll have to play with it a bit. With my current BIOS settings I only see one USB controller, however I know that if I toggle the "handoff" option for EHCI or XHCI I can get this to show up as three separate controllers, each in its own IOMMU group (see the quick check below).
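     A minimal way to see how your particular board splits things up, assuming a stock unRAID console (the 00:14.0 address is only an example; substitute whatever your lspci output shows):
     # List every USB controller the kernel sees (XHCI/EHCI/OHCI)
     lspci -nn | grep -i usb
     # For a given controller, show which IOMMU group it landed in
     readlink /sys/bus/pci/devices/0000:00:14.0/iommu_group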
  3. You don't NEED the extra lanes, they just allow additional lanes per PCIe slot. If you look at the MB manual, you'll likely find a section that lays out the lanes per PCIe slot (16x length) for either the 28-lane or the 40-lane CPU. Since it doesn't look as if you'll be doing any gaming with 5450's, you have more than enough lanes from the CPU to support your needs (even with just 2 VM's you're good whichever you choose, however at that point upgrade your GPU first). Without looking, I'd assume that if you have 4 PCIe 16x (length) slots, the 28 lanes would be divided as 16/4/4/4 or 8/8/8/4. For HD playback and basic 2D/3D tasks, even a 1x link is sufficient (the quick check below shows what a card actually negotiated). IOMMU grouping will not be an issue, as those slots come directly from the CPU, which supports ACS. The rest of your expansion slots (1x) will likely hang off the chipset, and those will also be in their own separate IOMMU groups. Edit: One other thing. In my testing with an Asus 5450 card, it was a P.I.T.A., P.O.S. and I should have pulled an Office Space (the movie; that copier had it coming!) on it. However, there are other users who use the 5450 with very good results. This could be a manufacturer/BIOS thing, as Asus supposedly has had some finicky GPU's (for virtualizing, from what I read at that time). Anyhow, you've been warned, and if you do have issues after installing drivers, performing VM resets, or powering a VM back on from a previous on/off state, I'd throw it at the wall, or at someone you don't like.... (Damn I wish I could get that wasted time back! :'( ) I tested this same GPU in 2 separate Z97 MB's and my current X99 with the same results (however on the X99 it worked OK in Openelec.. odd).
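     If you want to confirm what a slot is actually giving a card, a quick check from the console (the 01:00.0 address is just a placeholder for your GPU's PCI address):
     # LnkCap is the card's maximum; LnkSta is the width/speed it negotiated in this slot
     lspci -s 01:00.0 -vv | grep -E 'LnkCap:|LnkSta:'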
  4. First off, welcome! No worries, everyone was new at one point, and at varying levels of knowledge. Your best bet is to read the wiki (official documentation here): http://lime-technology.com/wiki/index.php/Official_Documentation This will get you started. There is also a LOT of good unofficial wiki info, written by users, available as well. 2nd best advice: search (did I mention search?), as a LOT of your questions are likely already well described. As for not knowing Linux, no biggie; this is less of an issue now than it ever was previously. LT (LimeTech) and the community have done a lot of work to keep new users from having to use the console or SSH to perform regular actions. If you want to do such things you still can, it's just not advised and (fortunately) not needed very often.
  5. This was talked about in one of the previous betas as a discovery by a user (JohnnieBlack, maybe?). Anyhow, from what I recall this is a big deal, as the array stays available while clearing (which was not the case previously); however, it only clears the drive. Preclear (by default) also does a post-read after the clear to verify the SMART parameters have not changed, which is a good indication of a problematic drive. (Someone else will likely have more/better things to add to this clarification.)
  6. I had 2 issues upgrading from 6.1.9 to 6.2b21 (not sure I have the logs, can look if you really want). Updated from the GUI/plugin, all was well, rebooted the server. Got a "boot failed" message on boot (not a BIOS "no boot device", but an actual statement of boot failed). Checked to make sure the boot order was right and it was set to legacy (non-UEFI) boot; all looked fine, still boot failed. Removed the drive, popped it into a Windows computer, re-ran make_bootable.bat (as admin), it finished correctly, and that fixed the booting issue. Previous VM's are listed correctly, but will not start by default. On VM edit, the primary vdisk is set to none (or auto, if that was an initial option); I have to set it to manual, at which point it knows the right location. It works, however if I stop the VM and go to edit, it is now (always) listed as "Primary vdisk location: none". I then change it to manual, it pops up with the right location, and I save. All is good, but if I edit again it's back to none until I change it back to manual.
  7. The opposite of this, actually: 6.2 now has iommu=pt as the default (it was not previously), so you're basically turning it back to "normal" mode (quick check below): "Sets the IOMMU into passthrough mode for host devices. This reduces the overhead of the IOMMU for host owned devices, but also removes any protection the IOMMU may have provided against errant DMA from devices. If you weren't using the IOMMU before, there's nothing lost. Regardless of passthrough mode, the IOMMU will provide the same degree of isolation for assigned devices." http://vfio.blogspot.com/2015/05/vfio-gpu-how-to-series-part-3-host.html
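     If you want to confirm what your current boot actually passed to the kernel (nothing assumed here beyond a standard unRAID console):
     # Show the kernel command line the host booted with; look for iommu=pt
     cat /proc/cmdline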
  8. I do agree with the rest of what you said, good points, but I couldn't help feeling this was a little unfair. They have been working very hard on both NAS and VM features for quite a while, and the VM side is still very new, not quite finished, not completely stable. If you feel it was a mistake not doing this sooner, then what would you have dropped from what they HAVE added? For a while it seemed to some that they were concentrating too much on VM improvements and PR, so I think many are glad to see time put into the NAS side recently in adding dual parity. Give them time, and as you have, make your voice heard, in order to help them decide feature priorities for the future. Very true, and the improvements to the VM capabilities have really been terrific over the last year. I think the GUI boot was supposed to be a good middle ground, and I do kind of like it (now that I've tried it), however I may need to post my experiences with "stealing" its GPU from it (it doesn't seem to like that, but the console doesn't mind). Honestly, if we could figure out a way to (somehow) reinitialize the GUI boot back to the primary display once a VM is shut down, then I think my use case would be pretty well covered (example: I need to stop the array, so I shut down all VM's, the primary VM shuts down, and the local GUI boot pops back up on that GPU; I restart the array, the VM restarts, and the GPU is re-assigned to the VM.. Me? A happy camper!). I think it has been a good balance of server "stuff" and VM "stuff" as of late. The inclusion of dual parity is enormous, however the changes to the VM manager have also been extensive. I'm certainly getting my $$'s worth (and now my Pro license is unlimited, booyay!).
  9. You're limited (to a point) by the design of the CPU and the number of PCI Express lanes coming from the CPU directly. Without ACS support there is also no specific isolation, and it is really dependent on how the motherboard manufacturer decides to "wire" the PCIe slots. The Z97 chipset, for instance, with an i7-4790k (a common choice), has 16 lanes directly from the CPU, no ACS support, and another 8 lanes from the chipset itself. The X99 chipset is also limited to the same 8 lanes from the chipset, however the CPU has either 28 lanes (5820k) or 40 lanes (5930k, 5960X). Most of the time (essentially always) the PCIe 16x slots are wired directly to the CPU. If we only have 16 lanes, then the slots share that bus, and without isolation between them they very likely end up in the same IOMMU grouping. So if the CPU has more of these lanes, it is very likely the MB will have more 16x slots, and the more we have, the more video cards we can run. Add ACS to that, and you're pretty much guaranteed that they will be isolated and assignable as needed (the snippet below shows how to check your groupings). Now the Z170 chipset (Skylake) is a more advanced chipset in some ways, in that its connection is a faster DMI 3.0 and it has 20 PCIe lanes from the chipset (used for 1x slots, WiFi, extra SATA, sound, etc...). However a 6600 or 6700k only has the same 16 PCIe lanes from the CPU, which again are primarily wired to the PCIe 16x slots. Since a lot of us use "gaming" motherboards (speaking X99 here), we get 4 16x slots (not all fully wired, per se; typically some split of the 40 total, ex: 16/16/4/4 or 16/8/8/8). However, if the board is designed more for a server, you will find that some have more 16x-length slots, wired as 8/8/8/8/8, which would support 5 GPU's all operating at 8x PCIe speed. You can also just buy specialty cards made for 1x slots, which will also work great for basic usage/HD playback, but are limited for gaming use (ie: not recommended).
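     To see how your board actually grouped things, something like this from the unRAID console prints every IOMMU group and the devices inside it; anything sharing a group with your GPU would have to be passed through together, or split up with ACS/the override:
     for d in /sys/kernel/iommu_groups/*/devices/*; do
       g=${d%/devices/*}; g=${g##*/}
       echo "IOMMU group $g: $(lspci -nns ${d##*/})"
     done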
  10. Fair points, however judging from the support requests and issues here on the forum, there seems to be a far higher number of Nvidia people complaining about issues than AMD ones. Now this may also be because Nvidia is currently the "better buy" (I do not know, but it's likely), however price-wise AMD remains competitive at most price points. Also, AMD has never (to my knowledge) released a software update that (basically) broke virtualizing their GPU's, causing the "error 43" message and leaving you to decide between using the older software and keeping Hyper-V enabled, or disabling it and using the newest driver. Now there is clearly a workaround for this (sketched below), however it is very recent. Even though they acted like "we don't know what you're talking about, but we don't plan to fix it", it sure seems intentional, and they have a product that directly supports that kind of use case. I sound like a fanboi, but I'm honestly not, in any way, about almost anything I purchase. I had an R260X that worked extremely well; it had to be RMA'd recently (good 'ol AMD). I recently purchased an R370 OC for testing and for use until mine comes back, and I've had zero issues with this card, plug and play. Many users here use the 6450 (a reasonably old card) without any problems also. On that same token, I have 3 GT 720's that are fantastic and take anything I throw at them. I commend the recommendation for a processor with ACS, as it is absolutely right for exactly what we're trying to do, however I think this one is a bit biased (but I don't get support emails/PM's like you likely get a LOT of). Anyhow, just having a friendly conversation.
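     For reference, the workaround that gets passed around is a couple of additions to the <features> section of the VM's XML; a sketch, assuming a libvirt/QEMU combination new enough to support both elements, and keeping whatever entries (acpi, apic, other hyperv flags) your template already has. The vendor_id value is arbitrary, anything up to 12 characters:
     <features>
       <hyperv>
         <vendor_id state='on' value='none123'/>
       </hyperv>
       <kvm>
         <hidden state='on'/>
       </kvm>
     </features>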
  11. If the ACS override is working, you should have exactly what you were expecting, that is: each device within its own IOMMU group. This would be listed in unRAID showing exactly that, as it then has no idea what the true grouping is, just that the patch is doing its thing. I have not heard of this patch not working on Z97, so this is something worth investigating or asking further about (the VFIO group would be where I'd go to ask). What is odd to me is that you didn't receive the error when starting the VM about "failed to get group 1, blah blah blah", and that it attempted to run with clearly bad results. Did you use the ACS override option in the GUI, or add the line to your syslinux.cfg manually (shown below)? They both do the same thing, but I want to verify that this is actually happening.
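     For reference, the manual version is just the override parameter added to the append line in syslinux.cfg, something like this (the label text may differ slightly on your flash drive):
     label unRAID OS
       menu default
       kernel /bzimage
       append pcie_acs_override=downstream initrd=/bzroot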
  12. That's not fair, how about "we don't recommend Fiji or Hawaii based AMD GPU's at this time"? Those of us with Bonaire and Pitcairn cards don't have these issues, AND they can be used for a VM as the only installed card. (I actually like Nvidia better, as I have never had an AMD card that didn't either die or have awful, awful driver support). Had to add (no, I did not make this, nor do I endorse the having of 8 arms):
  13. No worries, it's not too bad to get once you read up. The main (easy) thing is to see if, under the info button (or whatever it's called, top right corner), it says IOMMU: Enabled. If so, you should be all set; if not, it is likely just a BIOS setting that needs to be changed to have this show up correctly. Find the device ID, stub it (add a line to syslinux.cfg), add that section to your XML (from that thread), boot up, and all is well (rough example below). There are many more details in that thread, and if you have specific questions, just ask, we're a friendly bunch. Edit: If you're feeling adventurous you could also update to 6.2 Beta 21 (as of writing this), stub the device (same as the other method), and then you can use Edit for your VM and just select the NIC from the available devices listed at the bottom of the page. This removes the need to manually edit your XML to add the NIC to the VM.
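     Roughly what those first two steps look like, as a sketch (the 03:00.0 address and 8086:1533 ID are placeholders for whatever your own lspci output shows, and whether your guide stubs with vfio-pci.ids or pci-stub.ids the format is the same):
     # Find the NIC and note the [vendor:device] ID at the end of the line
     lspci -nn | grep -i ethernet
     # e.g. 03:00.0 Ethernet controller [0200]: Intel I210 [8086:1533]
     # Then stub it on the append line of syslinux.cfg:
     append vfio-pci.ids=8086:1533 initrd=/bzroot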
  14. Which driver did you install? Copy/paste the XML: click on the VM name and select "Edit XML". This will bring up the VM template (XML) that unRAID creates for you.
  15. This is what you need: hardware that supports VT-d or AMD-Vi (looks good from your sig), and the NIC in its own IOMMU group (if it isn't, you can try the ACS override patch, which supposedly doesn't work on Skylake). If both check out (quick check below), follow this sticky: http://lime-technology.com/forum/index.php?topic=39638.0
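     A quick way to confirm the first requirement from the console (Intel boards log DMAR lines, AMD boards log AMD-Vi):
     # If VT-d / AMD-Vi is enabled in the BIOS you should see IOMMU/DMAR messages here
     dmesg | grep -i -e dmar -e iommu | head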
  16. Could you provide your Win10 XML? I assume you previously had this working? Or are you just now trying to install Windows 10 and cannot?
  17. (I agree with the above, however) You could PXE boot the install disc; set your dad's machine to network boot. How much time do you have? http://lime-technology.com/forum/index.php?topic=31297.0 Don't ask grumpy for help, I don't think he'll answer anymore..
  18. What he said. Also, since a lot of platforms have IGD (integrated graphics), this isn't a big issue, as either Nvidia or AMD would work in that situation. The X99 chipset has no built-in IGD, but most Haswell/Skylake/Ivy Bridge platforms (speaking Intel) do. Even a lot of server boards have some very generic VGA output (perfectly good for what it does) that would suffice for this use case. If your board has an old PCI slot, that may also work as an Nvidia workaround; eBay has PCI video cards for ~$10.
  19. I believe LT is now using this to determine it (from the previous thread):
     "Seems like this should give you the list of siblings: cat /sys/devices/system/cpu/*/topology/thread_siblings_list | sort -u"
     "Beautiful... Honestly didn't know that existed. Must be new."
     "Well, we have added that under the system devices page for a future release."
     Awesome find!! It is recommended to keep each hyperthreaded sibling paired with its physical core when pinning, as they share the same cache. This should also cut down on latency versus having the HT sibling doing something else (another VM/process). (Example pinning below.)
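     As an illustration, if that command reported sibling pairs such as 1,7 and 2,8 (hypothetical numbers for a 6-core/12-thread chip), the pinning in a 4-vCPU VM's XML would keep each pair together, something like:
     <vcpu placement='static'>4</vcpu>
     <cputune>
       <vcpupin vcpu='0' cpuset='1'/>
       <vcpupin vcpu='1' cpuset='7'/>
       <vcpupin vcpu='2' cpuset='2'/>
       <vcpupin vcpu='3' cpuset='8'/>
     </cputune>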
  20. I'd check to see if there is newer firmware for your drive. When I was looking at purchasing an SSD ~1 year ago, there was discussion about the Evo getting or needing updates for some issues. I cannot say that this is related, but it's certainly worth checking. Otherwise I'd assume that there is something wrong with your drive or controller. Is it set to AHCI in the BIOS? I have a 512GB Samsung Pro drive and do not have any of these issues, and it is formatted as BTRFS.
  21. I've used this term, the wife disagrees! Use it for troubleshooting, basically: "Sets the IOMMU into passthrough mode for host devices. This reduces the overhead of the IOMMU for host owned devices, but also removes any protection the IOMMU may have provided against errant DMA from devices. If you weren't using the IOMMU before, there's nothing lost. Regardless of passthrough mode, the IOMMU will provide the same degree of isolation for assigned devices." http://vfio.blogspot.com/2015/05/vfio-gpu-how-to-series-part-3-host.html For difficult cards it is recommended to try this; add it to your syslinux.cfg as such. From this:
     label unRAID OS (GUI)
       menu default
       kernel /bzimage
       append initrd=/bzroot
     change it to this:
     label unRAID OS (GUI)
       menu default
       kernel /bzimage
       append iommu=pt initrd=/bzroot
     Apply and then reboot your system.
  22. Unfortunately not, they have nothing to do with the issue. However, I'm surprised that you're having issues with the items in their own IOMMU groups. 6.2 by default has iommu=pt set (if I'm not defining that perfectly, sorry, on phone). That may help your issue, or if not you can toggle it back off. Which version are you currently using, 6.1.9? You can also enable this in 6.1.9 by adding it to your syslinux.cfg file.
  23. I was just going to start talking about this (the manual part of this equation). Since this is the topic to discuss such a feature, I think this is a LARGE miss by LT in not wanting to implement this feature sooner. I strongly feel that without it we're losing out on making this statement truly useful: "This enables users to leverage the same hardware providing NAS services to the home as a workstation, where they can do work, play media/games, and be creative with a high-performance computing platform." https://lime-technology.com/unraid-6-press-release-2/ This statement is still true, and not specifically misleading, however it glorifies the situation. Now, no one said to me "Jeff, you should sell that other PC you have and virtualize everything into UnRAID, that will be the cat's meow!". However, I and many others did just that, and the whole "one box to rule them all" concept is very good, and we're very close to being there. I wouldn't want to go back either, as I like everything in one computer, and I truly feel I'm using the processing power that I have (which was not the case before, as I don't game much, and an i5 for surfing the web is complete overkill). I do not run Pfsense or other things that are very important to others here (even though I like the idea and may play with it in the future), however it is obnoxious to have to shut down my primary PC in order to do maintenance on the array. This leads me to always having a netbook nearby, or using my phone (which works, but is not the best for completing tasks/typing commands). The quick and dirty solution was to add some form of GUI/X environment into 6.2 (which I have not used yet), which then opens up the security risk of Firefox with admin rights (as I understand it) directly on the host. This solution still requires you to have an IGD for it to use, or to sacrifice a video card for this output. This is where "UnRAID as a guest" is very intriguing, however it is also not the path followed by many here (or they're not very vocal). That solution still has customers buying UnRAID, so it is not exactly losing LT money. So, with all of that said: for a "primary" VM that we don't want to have shut down unless the computer is actually rebooted or shut down, what can we do to have another copy of libvirt/VFIO/QEMU? I say "primary" as I have no issue with my other VM's being managed the way they are now, however I wouldn't be against this being universally an option. Are the needed KVM-related things in the kernel still accessible in this condition? (This is not my area of expertise.) Could we place a secondary copy of bzimage to use if not, or use Fedora/Arch/whatever as a base image to do what we want here? Thinking out loud, but I think there is a decent number of people who would benefit from/appreciate this option. We should call it "Unassigned VM's"..
  24. I never touch a thing, and it does its thing very reliably. I cannot recall from my first run/initial setup, however you can see when/if it's happened on the backend status page of Mythweb. Mine currently lists this:
     Last mythfilldatabase run started on Thu Mar 31 2016, 10:15 AM and ended on Thu Mar 31 2016, 10:15 AM. Successful.
     There's guide data until 2016-04-14 02:00:00 (14 day(s)).
     DataDirect Status: Your subscription expires on Tue Jul 19 2016 10:33 PM
  25. A little more investigation arguing against the CPU being bad. This guy had the same issue with XMP on his X99-SLI; one board worked great, the other did not (odd). https://hardforum.com/threads/ga-x99-sli-xmp-fail.1892493/ Also, the TSC timer seems to be broken on some Gigabyte, and also Asus, boards from my research. This may have always been the case (minus the lockup), however I never gave it much thought. Since I had all of these issues, I'm much more critical of things looking out of place in the syslog. I think the PCIe errors I received are also related to the XMP setting that I have always used on the other (exact same) MB, not knowing it didn't work quite right with this one. Since disabling it, running Memtest, and then booting up UnRAID (going on 12 hours here), that message has not returned! So I may be out of the woods soon, and my trigger finger may need to relax from picking up the HX850i I have my eye on (currently on a pretty good sale) here: http://www.newegg.com/Product/Product.aspx?Item=N82E16817139083 Edit: Found an old syslog from my previously good working board.. Same exact TSC message, just never noticed. So with that, I think the CPU is fine. Thanks for listening!
     Feb 15 09:42:55 Server kernel: Switched APIC routing to physical flat.
     Feb 15 09:42:55 Server kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
     Feb 15 09:42:55 Server kernel: TSC deadline timer enabled
     Feb 15 09:42:55 Server kernel: smpboot: CPU0: Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz (fam: 06, model: 3f, stepping: 02)
     Feb 15 09:42:55 Server kernel: Performance Events: PEBS fmt2+, 16-deep LBR, Haswell events, full-width counters, Intel PMU driver.
     Feb 15 09:42:55 Server kernel: ... version: 3
     Feb 15 09:42:55 Server kernel: ... bit width: 48
     Feb 15 09:42:55 Server kernel: ... generic registers: 4
     Feb 15 09:42:55 Server kernel: ... value mask: 0000ffffffffffff
     Feb 15 09:42:55 Server kernel: ... max period: 0000ffffffffffff
     Feb 15 09:42:55 Server kernel: ... fixed-purpose events: 3
     Feb 15 09:42:55 Server kernel: ... event mask: 000000070000000f
     Feb 15 09:42:55 Server kernel: x86: Booting SMP configuration:
     Feb 15 09:42:55 Server kernel: .... node #0, CPUs: #1
     Feb 15 09:42:55 Server kernel: TSC synchronization [CPU#0 -> CPU#1]:
     Feb 15 09:42:55 Server kernel: Measured 228446458923 cycles TSC warp between CPUs, turning off TSC clock.
     Feb 15 09:42:55 Server kernel: tsc: Marking TSC unstable due to check_tsc_sync_source failed
     Feb 15 09:42:55 Server kernel: #2 #3 #4 #5 #6 #7 #8 #9 #10 #11