scorcho99

Everything posted by scorcho99

  1. Huh, I thought the mover would move them to the main storage array. I'll play around with it tonight but how do I set the cache pool up for single disks? I was under the impression it just does a btrfs mirror array by default. I've seen command line directions for changing the array type to striped but not to separate disks.
  2. Great! Just to be clear, I also want a parity-protected storage array. But I'd also like the two additional cache disks to have user shares extended across them. Basically, I want to store VMs on the cache share but be able to move them from an SSD to an HDD, or back, within the cache pool without updating the VM's drive location.
  3. I gather that, by default at least, multiple disks in the cache pool go together as a btrfs raid 1 (btrfs-style raid 1, anyway) volume. I've seen it's possible to convert this to a striped volume as well. Another idea presented in the linked thread was having the cache pool disks in a single-disk / JBOD mode instead. Basically, it would be like the Unraid storage array, only without parity protection. Did this ever get implemented? And if so, how could I set it up?
  4. Well, I gather it works with btrfs raid 1 mirroring, and people have turned on raid 0 striping. But what jonp was suggesting was having it behave basically the same as the main storage array does currently, except without parity protection (user shares would extend across the different disks, but the disks would actually be separate rather than existing in an array). I haven't seen anything to suggest that mode works, is all.
  5. Actually, I guess all I would really need is a JBOD cache pool to do this. It seems jonp wanted this in addition to the default raid1 and optional raid0, based on what I read here: Did this ever become supported? Or is it even possible to set up?
  6. Thanks for the info on user shares with unassigned devices. That kind of throws cold water on the idea though!
  7. I have a 256GB SSD and another, larger hard drive. Unraid stores its VMs on a cache drive, or it can anyway. Let's just call the Unraid cache "VM storage" for now. What I'd really like is for the Unraid VM storage to be an array of the hard disk with the SSD acting as a cache of the most frequently used data, kind of like Intel Smart Response. (There's a flash cache tech or something in Windows that does the same thing.) This doesn't seem like it's easy to do, but I was thinking what would be almost as good would be to mount both disks with a VM disk user share on both, and then store the most frequently used VMs on the SSD. The advantage here would be that when moving VM images between drives I wouldn't need to update the VM's disk location. I think I would need unassigned devices to do this; the cache pools seem like they just combine the disks and wouldn't work for this. So is that possible? Load the SSD in unassigned devices and just create a user share on it and the cache disk? Ideally I'd have a mover script, really just like the one that exists, except it would move the most frequently accessed files to the SSD and the less accessed ones to the HDD (a rough sketch of that follows below). I could probably do this manually, though, and it wouldn't be too much of a burden.
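     Something like the following is what I have in mind for that mover script. It's only a rough sketch: the disk paths, the ".img" filter and the "keep three on the SSD" rule are all made up for the example, both locations would have to sit under the same user share for the /mnt/user path trick to work, and the VMs would need to be shut down before anything gets moved.

     #!/usr/bin/env python3
     """Rough mover sketch: keep the most recently accessed VM images on the
     SSD and push the rest to the HDD.  Paths and keep-count are assumptions."""
     import os
     import shutil

     # Hypothetical disk-level paths that both back the same user share, so the
     # /mnt/user/... path a VM uses keeps working after an image is moved.
     SSD = "/mnt/disks/vm_ssd/vmshare"
     HDD = "/mnt/disks/vm_hdd/vmshare"
     KEEP_ON_SSD = 3  # how many of the most recently used images stay on the SSD

     def images(root):
         """Yield full paths of VM disk images directly inside root."""
         for name in os.listdir(root):
             path = os.path.join(root, name)
             if os.path.isfile(path) and name.endswith(".img"):
                 yield path

     # Rank every image from both disks by last access time, newest first.
     ranked = sorted(
         list(images(SSD)) + list(images(HDD)),
         key=lambda p: os.stat(p).st_atime,
         reverse=True,
     )

     for rank, path in enumerate(ranked):
         target = os.path.join(SSD if rank < KEEP_ON_SSD else HDD,
                               os.path.basename(path))
         if path != target:
             print(f"moving {path} -> {target}")
             shutil.move(path, target)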
  8. I currently have an ESXi configuration where the virtual machines that run on it are all stored on an NFS share. The NFS server runs a script where all the VMs are shut down and the file system is snapshotted in a clean state. Then I start things back up and run a backup of the VMs from the snapshot. I like this configuration a lot because the VMs can keep chugging along while a backup runs. There are a lot of other things I don't like about ESXi, and I'm using Unraid for general file storage already, so it would be nice if I could move to an Unraid-only solution in the future. There doesn't seem to be a file system snapshot option in Unraid, so I feel it would be a good bit harder to do. Either the same snapshot system on the cache drives or the ability to mount an external/remote NFS share and store the VM files on that share would get the job done for me. Is anything like this already possible? (A rough sketch of the snapshot approach follows below.)
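     For reference, the flow I'm describing would look roughly like this on the Unraid/KVM side. It's just a sketch under a big assumption: that the VM images live in a btrfs subvolume on the cache drive (a plain directory can't be snapshotted, so you'd have to set that subvolume up yourself). The paths, the backup destination and the crude 60-second wait are all placeholders.

     #!/usr/bin/env python3
     """Sketch of: shut the VMs down, snapshot, start them again, then back up
     from the snapshot while they run.  Assumes /mnt/cache/domains is a btrfs
     subvolume; all paths are examples."""
     import subprocess
     import time

     DOMAINS_SUBVOL = "/mnt/cache/domains"     # assumed btrfs subvolume
     SNAPSHOT = "/mnt/cache/domains_snap"      # read-only snapshot location
     BACKUP_DEST = "/mnt/user/backups/vms/"    # wherever the copies should land

     def run(*cmd):
         print("+", " ".join(cmd))
         subprocess.run(cmd, check=True)

     # 1. Shut the running VMs down cleanly so the images are consistent.
     out = subprocess.run(["virsh", "list", "--name"],
                          capture_output=True, text=True, check=True)
     vms = [name for name in out.stdout.splitlines() if name.strip()]
     for vm in vms:
         run("virsh", "shutdown", vm)
     time.sleep(60)  # crude; a real script would poll `virsh domstate`

     # 2. Snapshot the subvolume, then bring the VMs straight back up.
     run("btrfs", "subvolume", "snapshot", "-r", DOMAINS_SUBVOL, SNAPSHOT)
     for vm in vms:
         run("virsh", "start", vm)

     # 3. Back up from the read-only snapshot while the VMs keep chugging along.
     run("rsync", "-a", SNAPSHOT + "/", BACKUP_DEST)

     # 4. Drop the snapshot once the backup is done.
     run("btrfs", "subvolume", "delete", SNAPSHOT)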
  9. How do you guys usually do this? I can do it from Linux or Windows, but what is the best way? GPU-Z? (One Linux option is sketched below.)
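     (Assuming the question here is about dumping a card's video BIOS, which is what GPU-Z is normally used for, one Linux option is to read the ROM out of sysfs. The PCI address below is just an example, it needs root, and some cards only hand out a clean copy when they aren't driving the display.)

     #!/usr/bin/env python3
     """Dump a GPU's ROM via the sysfs 'rom' attribute.  The address is an
     example; find the real one with lspci."""

     PCI_ADDR = "0000:01:00.0"                 # example address, check lspci
     ROM = f"/sys/bus/pci/devices/{PCI_ADDR}/rom"

     with open(ROM, "w") as f:                 # writing "1" enables ROM reads
         f.write("1")
     with open(ROM, "rb") as f:
         data = f.read()
     with open(ROM, "w") as f:                 # turn it back off afterwards
         f.write("0")

     with open("vbios.rom", "wb") as out:
         out.write(data)
     print(f"wrote {len(data)} bytes to vbios.rom")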
  10. Just a note: in my travels I found that Fresco Logic USB3 cards have really nasty reset issues. I'd recommend a different card if that's the chipset yours uses.
  11. Can you get the nouveau driver to work? I got my GT 710 to work with Linux Mint on ESXi the other day, but I couldn't get the proprietary drivers to work. I think you should be able to get this to work with KVM, though, as it has far more options.
  12. I've wondered about this myself. I know in ESXi you can (optionally) flag specific cores to be used, and I think CPU pinning with KVM should be able to do the same thing here. In theory, if you only need a quad core in the VM, you can eliminate the CCX crosstalk within that VM at least (a rough pinning example follows below).
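     As a rough illustration of the pinning idea (not something from this thread): with the libvirt Python bindings you can pin a running guest's vCPUs to a chosen set of host CPUs, which on Ryzen would mean picking cores that all live on one CCX. The VM name and the host CPU numbers below are made up; the actual core-to-CCX layout has to be checked on your own box (lscpu -e), and on Unraid you'd normally make this permanent with <vcpupin> entries in the VM's XML instead.

     #!/usr/bin/env python3
     """Pin a running guest's vCPUs to one CCX's cores (example numbers only)."""
     import libvirt

     VM_NAME = "Windows10"        # example VM name
     CCX_CORES = [0, 1, 2, 3]     # assumed to share a CCX; verify with lscpu -e

     conn = libvirt.open("qemu:///system")
     dom = conn.lookupByName(VM_NAME)
     host_cpus = conn.getInfo()[2]            # total host CPUs

     # Pin vCPU 0 -> core 0, vCPU 1 -> core 1, and so on.
     for vcpu, host_cpu in enumerate(CCX_CORES):
         cpumap = tuple(i == host_cpu for i in range(host_cpus))
         dom.pinVcpu(vcpu, cpumap)
         print(f"pinned vCPU {vcpu} to host CPU {host_cpu}")

     conn.close()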
  13. Sure. The problem you'll run into is that you can't usually boot off an add-on card's USB port; I think it would need its own boot option ROM, which I've never seen one have. You CAN get around this, though, if you're willing to endure an extra-slow Unraid boot. Here's what I'd try:
      1) Buy a cheap VIA USB2 PCI card with 5 ports (I only suggest this card because I've used this particular kind of card like this before)
      2) Install Plop Boot Manager onto a bootable USB flash drive
      3) Plug that flash drive into the motherboard USB ports
      4) Install the PCI card and plug the Unraid flash drive into that
      5) Configure the BIOS to boot from the Plop flash drive
      6) Once Plop comes up, configure it to ignore its own device and then tell it to boot from USB
      7) Unraid should now boot, brutally slow, but it should boot. Now you can pass through the onboard USB controller to another VM.
      Unnecessarily following up on my first idea, I googled around and it looks like legacy PCI passthrough is deprecated and now unsupported, but I think it can be enabled. I've never heard of anyone doing it myself with KVM.
  14. I seem to remember that KVM passthrough doesn't have support for plain old/legacy PCI device passthrough. I saw it in a forum somewhere and thought it was a bummer. It is possible to do; ESXi does it. All devices under a PCI bridge must go together, which in practice almost always means every PCI slot must go to the same VM...but it is possible and does work on ESXi. For whatever reason, KVM decided not to implement it. I can understand why: it's of much more limited use than PCI-e passthrough. Maybe things have changed, though, or Unraid is better about this?
  15. VT-d has been available on garden-variety non-K i5s since Sandy Bridge at least. I think starting with the i5 4000 series at least some of the K models also had it. Skylake is even better here than you guys are letting on, though. The lowly Pentium G4400, and a Skylake Celeron no one can actually find, even have it. I guess Intel decided not to be so greedy this go-around with at least one feature.
  16. Yes, NFS datastores can be accessed by ESXi, so you just need to present an NFS share within Unraid. The problem is that you'll need an additional datastore to house the Unraid VM's own files, because Unraid can't present its NFS share to ESXi until it's running, and it can't run until ESXi starts the VM. I run this configuration at home; it's pretty annoying because the local datastore only has to house a few megs of VM configuration files and a tiny Plop boot loader ISO to get things up and running.
  17. What was the USB reset issue people were having? I'm not seeing it with my ESXi 6u1 + 6.1.7 combination, I don't think. The guide says you can't do spindown or SMART with RDM...mine seem to be working. I remember I had to create the RDM files using a different option, set the appropriate SCSI controller in the VM, and then hack the ESXi startup script so it wouldn't poll the drives itself every hour or so, causing them to spin up for no reason. I used to get all this junk in the direct console when it was running passed-through disks, but that seems to have disappeared with 6.1.7.
  18. Well, that blacklist option doesn't seem to do anything. Does anyone who knows Linux and Unraid better than I do know of a way to prevent the xhci / USB3 driver from ever loading? It's really the last idea I have after pci-stub failed. I don't need Unraid to use any of the USB3 controllers at all. I'm wondering if others have this problem. Can anyone test plugging a USB3 device into a USB3 port that is passed through to a VM (BEFORE turning on the host) and tell me if the device appears OK in the VM? I ran a lot of tests and everything works perfectly if I have USB2 devices connected. I tested with an external hard drive, flash drives, a USB mouse and USB sound cards, on both the PCI-e card and the onboard controllers. I even used an extension cable to turn a USB3 device that had caused the problem into a 2.0 one, and it worked fine. It's only USB3 devices that mess things up. Between that and the inability to get old PCI (classic) devices working, I'm pretty much stuck on this project.
  19. So I haven't played with this much recently. I still have that problem. I got ESXi working and tried to reproduce it there, and I could not. Both the onboard and NEC card seem to work perfectly for me once I set it up correctly. I noticed that with ESXi, when the VMs that use the passed-through controller are down/off, the Samsung external hard drive's activity light remains off and the drive stays off (this drive is weird and only turns on if it detects it is connected, or something). With Unraid, the drive shuts off when a VM shuts down, but then it will come back on afterward. This leaves me leaning towards the idea that the host is screwing around with the controllers and leaving them in a gummed-up state so that they don't always work. I found a post about someone doing passthrough with an AMD video card and having the passthrough messed up after he loaded the AMD drivers on the host (he had two AMD cards), even though he had assigned the device to pci-stub. He uninstalled the Radeon driver from the host and it solved it. It's not the same thing, but it's sort of close. My motherboard is a Gigabyte, so that weird on/off charge feature that has no options in the BIOS also seems like something I could blame. All I can think to do now is try blacklisting the xhci module. I have yet to see the problem with only USB2 devices attached, although I've only tried a few. I think I just need to add: append xhci_hcd.blacklist=yes initrd=bzroot to syslinux.cfg
  20. Hmm, didn't know about this. What I have is: PCI-E x16, PCI-E x1, PCI-E x16, PCI. Should I be OK with one x16 video card and then either one x1->x16 video card OR a PCI video card? If you only have one PCI slot then this shouldn't even be a concern: one bridge to one PCI slot means there really shouldn't be any effective difference, so you could install something in all the slots and pass each through to a different VM. PCI-e devices communicate some kind of ID so that the hypervisor can know which device to talk to, whereas the PCI bus doesn't have anything like it, so the hypervisor just sends communication to the entire bridge and can't split up devices behind that bridge between different VMs. That's my kind of crude understanding of why this limitation exists. In theory a motherboard manufacturer could put a separate PCI bridge in front of each PCI slot to get around this. In practice, they're never going to do that, since it increases cost and is usually of no benefit.
  21. NO!! You definitely don't want a card that requires auxiliary power. The reason they have that power connection is that the card requires more power than the PCIe bus can provide. Not really a factor here, however, as the inexpensive cards you're looking for aren't going to have this. I agree the best overall bet is to get a low-power card to use with the adapter. I think the 6450/5450 cards top out at like 20 watts under load or something; I suspect nvidia has something similar. That said, I ran that 8800GT card through a 16x -> 1x PCI-e adapter, and that had aux power and wasn't a low-power card by any means. And it seemed to work fine. I couldn't say why, though. I'd guess either the aux power made up for the shortfall or, more likely, in practice (at least some) motherboards actually provide the full 16x spec power to all the slots.
  22. Regarding PCI cards: the IOMMU specs essentially say that all PCI devices behind a bridge must be passed through together. So if you have 3 PCI slots, they are almost certainly behind one bridge and can therefore only be passed together to one VM. So only buy one! (There's a quick way to check the grouping; see the sketch below.) I can't even get anything behind a PCI bridge to work with Unraid/KVM passthrough on my board for some reason. It complains that the devices are in use/busy without the bridge included, but if I try to pass through the associated bridge it tells me it doesn't exist (even though it's listed in lspci). I even tried binding everything to pci-stub in the kernel options.
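     A quick way to see what has to travel together is to walk the IOMMU groups in sysfs; everything that shows up in the same group (for example a PCI bridge plus the slots behind it) has to go to the same VM. Nothing below is Unraid-specific, it's just reading /sys/kernel/iommu_groups.

     #!/usr/bin/env python3
     """List every IOMMU group and the PCI devices inside it."""
     import os

     GROUPS = "/sys/kernel/iommu_groups"

     for group in sorted(os.listdir(GROUPS), key=int):
         devices = os.listdir(os.path.join(GROUPS, group, "devices"))
         print(f"IOMMU group {group}:")
         for dev in sorted(devices):
             print(f"  {dev}")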
  23. Some are (I know there's a kind of expensive StarTech one that bills itself as doing this); most don't seem to take it into consideration. Depending on your case and card, an easy ghetto mod to use with this adapter is to stack some motherboard standoffs, and possibly washers, to artificially extend the backplate mounting screw hole to accommodate the additional height from the adapter. Then you can even use a full-height card, as long as your case has a little extra room above the cards. Another thing to consider is that supposedly a 1x PCI-e slot is only spec'd for 30 watts or something, while the 16x graphics card slots are 70-ish. A lot of the adapters now come with some kind of molex connector to maybe fix this, but I have my doubts. I did run an 8800GT way back with an adapter, and I sort of think that the extra graphics card power plug that was required by that card anyway was enough. I only ran 3DMark tests with it, and besides the expected performance hit I didn't have trouble with my testing. Honestly, I suspect all the ports provide full power these days or something, but I really have no idea. If you're using a low-end card I just don't think this is going to matter either way. This was kind of an uncommon idea back when I was playing around with it, but when bitcoin mining became huge there were a lot of people running boatloads of graphics cards in a single machine, so this became pretty popular.
  24. Traditionally the 1x cards have been so overpriced that they made no sense to buy. Before the adapters became ubiquitous, I saw some people had taken to sawing off the extra card-edge length so that a card would fit in a PCI-e 1x slot, or breaking the back off a motherboard's 1x slot so that the card could hang out the back. Believe it or not, both "solutions" were reported to work as expected.