
scorcho99

Posts posted by scorcho99

I have a 4TB parity disk that I'm going to let go. Way back when, it was used as an NTFS backup drive holding some sensitive information.

     

I installed it in a test Unraid machine with a 1TB drive, a 500GB drive, and a few other smaller drives. Was the act of parity syncing effectively the same as performing a secure erase? Nothing on there can be recovered, right? I don't actually care about the array data, just the stuff that was on the disk when it was NTFS.

     

Just a sanity-check question here: it seems like the entire disk should have been filled with effectively random data or zeroes.
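For what it's worth, the parity math supports that intuition. A minimal sketch of single-parity XOR (toy byte strings, not real sector sizes): every parity sector gets recomputed from the data disks during a sync, and my understanding is that sectors beyond the largest data disk are written as zeroes, so the old NTFS contents get overwritten either way.

```python
# Sketch of single-parity math: each parity byte is the XOR of the
# corresponding bytes on all data disks, so a parity sync rewrites
# every sector of the parity disk with newly computed values.
def parity(sectors_per_disk):
    # sectors_per_disk: equal-length byte strings, one per data disk
    out = bytearray(len(sectors_per_disk[0]))
    for disk in sectors_per_disk:
        for i, b in enumerate(disk):
            out[i] ^= b
    return bytes(out)

d1 = bytes([0b1010, 0b1100])
d2 = bytes([0b0110, 0b1010])
print(parity([d1, d2]))  # byte-wise XOR of the two toy "disks"
```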

  2. 17 hours ago, rootPanda said:

@scorcho99 Do you have the model numbers for the Startech cards you own that are confirmed working?

My mistake: these cards aren't StarTech, they're Syba. They're perhaps not much use for people here because they aren't 4-controller USB cards. I basically use a bunch of them to get SATA ports and USB ports out of all my PCIe 1x slots.

     

    https://www.amazon.com/gp/product/B00MVTB8TK/

  3. 7 hours ago, jbartlett said:

I updated two of my three servers. On one, the VM Manager was set to disabled after the reboot. No apparent loss after re-enabling, which is fortunate because I have ten custom-tweaked VMs that I would find unpleasant to redo. On the other machine, a VM that had vanished sometime in the past was available again after the reboot.

     

    So, what's the best way to back up the VM settings? Stopping the array and backing up /mnt/user/system/libvirt/libvirt.img ?

I've had this problem with earlier releases when changing other configuration options. Never really did figure it out, but it never seemed to happen when I wasn't messing with things, so I wasn't too worried about it. In my case I never seemed to lose anything; it was just an annoyance since I had to reboot after re-enabling.
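Re the backup question above: a minimal sketch of what I'd try (the paths are the stock Unraid locations; the function form and backup directory are just my example, and the image should not be in use while you copy it, i.e. array stopped or VM manager disabled):

```shell
# Sketch: copy libvirt.img to a dated backup file.
# Assumes the image is not in use (array stopped / VM manager disabled).
backup_libvirt() {  # usage: backup_libvirt <libvirt.img> <backup-dir>
  mkdir -p "$2"
  cp "$1" "$2/libvirt-$(date +%Y%m%d).img"
}

# e.g. on Unraid:
# backup_libvirt /mnt/user/system/libvirt/libvirt.img /mnt/user/backups
```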

  4. 13 minutes ago, saarg said:

    It is the PCI address in the new method. The post you quoted says the new method uses addresses 😉

     

    You can also use PCI addresses with xen-pciback.hide in your syslinux.conf. I haven't really checked if it's still available, but it doesn't hurt to try it.

Syntax is as below:

    xen-pciback.hide=(08:00.0)(08:00.1)

     

    Ah, I see. I must be blind today.

  5. On 1/14/2020 at 8:50 AM, meep said:

    I too tried this recently, and it didn't work (6.8.0)

vfio-pci-ids do work, but not in cases where the user wants to pass through, say, one of 2 on-board USB controllers which have the same ID.

    The new BIND method would be perfect for this as it uses addresses, rather than IDs. 

Hmmm, I thought the BIND method did use PCI addresses, not device IDs. That's what I entered anyway, since that was the format a forum post referenced. Maybe that is why it didn't work for me.
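For anyone else trying it, this is the format I used, from memory, so treat the details as an assumption; the addresses are examples only:

```
# /boot/config/vfio-pci.cfg -- one line, space-separated PCI addresses
BIND=0000:08:00.0 0000:08:00.1
```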

  6. On 1/2/2020 at 3:45 AM, sosdk said:

    I would like a cache setting that allows files to be on both cache pool and array.

Second this, for the same reason as Pauven.

     

It's not as easy as it sounds, though: what happens when you change a file on the cache? How will that sync back to the array?

Another component I've found missing: Unraid's QEMU appears not to be compiled with OpenGL support. This isn't required to make GVT-g function, but there is a dmabuf feature that requires it to link the VM display to the mediated card. You can use a client-installed VNC, RDP... maybe Looking Glass instead, but it's a real quality-of-life improvement that isn't there.

     

That said, given how Unraid is built, it's not clear to me whether OpenGL would actually work. The i915 driver must be loaded for this to work, but that might not be the only missing backend piece.

  8. 9 minutes ago, techsperion said:

    I had this a couple of times. Have you tried a dummy HDMI plug? https://www.amazon.com/Emulator-Headless-Display(Fit-Headless-1920x1080-3840x2160-60Hz)-2Pack/dp/B074NNZYW4

     

    solved it for me!

I don't think that is it, because it's plugged into a monitor the whole time. It also worked reliably until I started this whole i915 adventure. Although maybe there is something going on there.

So this isn't really about Plex, but people doing Quick Sync on Unraid have a similar use case to mine. I've found that once I modprobe i915, my Nvidia card just gives a black screen when passed through. If I don't modprobe after boot, it works fine.

     

In my case the iGPU is primary and the Nvidia card is secondary.

I found that if I pass the Nvidia card through (so it's in use) and then modprobe, things are OK.

Based on that, I added the Nvidia card to the vfio-pci IDs in syslinux.cfg and confirmed vfio was in use at boot. Then I modprobed, but it still broke Nvidia passthrough!

     

This is perplexing to me. i915 shouldn't be able to touch the Nvidia card, in my mind, since it's not a compatible driver. I assumed all the displays were reinitialized or something when the driver loaded and this was tainting the Nvidia card somehow, but if it's bound to vfio I don't see how i915 should be able to get at it at all.

     

I think I can work around this with scripts and a dummy VM to "protect" the Nvidia card from i915, but that seems ridiculous.

     

Anyone else seeing this behavior, or is it just me? I've only tried Nvidia on SeaBIOS VMs, so maybe that is a factor.
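For anyone checking the same thing: the way I verify which driver owns the card is to read the device's sysfs driver symlink. A sketch (the base-directory argument exists only so it can be tried against a fake tree; on a live box the real path is /sys/bus/pci/devices, and the address below is a placeholder):

```shell
# Sketch: print which kernel driver currently owns a PCI device by
# resolving its sysfs "driver" symlink.
driver_of() {  # usage: driver_of <devices-dir> <pci-address>
  basename "$(readlink "$1/$2/driver")"
}

# On Unraid: driver_of /sys/bus/pci/devices 0000:01:00.0
# should print "vfio-pci" once the stub has claimed the card.
```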

So every time I've gone looking for SATA controllers there hasn't been much to find. We've had the ASMedia 1061, the Marvells that don't have a good reputation with Unraid, and then there are the SAS controllers with flashed firmware that most people settle on. I've seen a newer version of the ASMedia, but it's still a 2-port part.

     

    I saw this pop up last month:

    https://www.amazon.com/dp/B07T3RMFFT

    https://www.amazon.com/dp/B07ST9CPND

     

Someone in the reviews already claims to have tried it with Unraid.

     

If you look the chip up on JMicron's site, this is a PCIe Gen3 x2 SATA3 controller giving you 5 ports. Seems like an interesting alternative in a land of currently sparse options. Pretty sure that even if you installed these in a 1x slot you'd have enough bandwidth to service 5 spinning-rust drives.

     

The question is whether they are reliable.
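The 1x-slot claim roughly checks out on the back of an envelope. Both figures below are my assumptions, not from the card's spec sheet: a PCIe 3.0 lane carries roughly 985 MB/s of usable bandwidth after 128b/130b encoding, and a 7200rpm drive sustains maybe 150-180 MB/s even on the outer tracks.

```python
# Back-of-envelope bandwidth check (all numbers are rough assumptions).
lane_mb_s = 985     # usable bandwidth of one PCIe 3.0 lane, approx
drive_mb_s = 180    # optimistic sustained throughput per spinning disk
drives = 5

need = drives * drive_mb_s
print(f"{need} MB/s needed vs {lane_mb_s} MB/s available on a x1 link")
```

So five drives all streaming at once land just under a single Gen3 lane, and in the card's intended x2 slot there's headroom to spare.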

  11. 9 minutes ago, flaggart said:

    Surprised there is not more activity from others in this thread, it would be amazing to be able to virtualise integrated graphics in the same way as the rest of the cpu and have multiple vms benefit from it.  I would ask you for more details to try myself but I didn't think something like this would be possible and now I am running xeons.

Yeah, it doesn't seem like there is a lot of interest, I guess. That's kind of why I set out to try it myself. I actually didn't think I was going to get it to work at all, much less transfer the setup to Unraid. I'd given up on the idea since Coffee Lake support was initially not going to happen, and then it finally showed up in the 5.1 kernel.

     

Next step is to mod my BIOS to support increased aperture sizes. That's a big problem for anyone running this on consumer motherboards: the mediated GPUs take their slice from the GPU aperture, not from the shared variable memory. While changing the aperture size is a supported option, most motherboards seem to just lock it at 256MB. This means I can only create 2 of the smallest type of virtual GPU at the moment.
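The instance count is just aperture arithmetic. The per-instance figure below is inferred from the fact that I get exactly two, not taken from Intel documentation:

```python
# Toy arithmetic: how many of the smallest vGPU type fit in the aperture.
aperture_mb = 256   # what most consumer boards lock the aperture to
per_vgpu_mb = 128   # inferred per-instance aperture cost (assumption)

print(aperture_mb // per_vgpu_mb)  # instances that fit
```

Doubling the aperture in a modded BIOS should double that count, which is the whole point of the exercise.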

So, I built a custom Unraid 6.8.0-rc7 kernel adding support for these modules, which was a first for me. I ended up relying on the Unraid-DVB scripts to do a lot of the heavy lifting on that front.

     

After struggling with some VM configuration problems, I now have GVT-g working on Unraid. At least, it's working as well as it worked on Ubuntu. Lots more to investigate. But as a proof of concept, I think it's fair to say that the kernel options listed above are the only thing really blocking this from working today.

That would be cool, but I really don't think so. Everything I've read says DisplayLink devices are slaves to the primary GPU. I think there is recent support for slaving them off virtual GPUs in VMs (allowing you to turn a virtual GPU into something that displays on a real monitor), but that isn't what you're after.

     

You won't be able to boot Unraid from the controller.* USB boot typically requires the controller to be tied into the BIOS, and only onboard controllers are. Theoretically someone could make a USB controller with a boot ROM on it, but I have never seen or heard of one. Probably your best bet is to just buy another USB card that works better.

     

*If you were really desperate and had enough USB keys lying around, it might be possible to use the Plop boot manager to make this work. Plop lets you boot USB devices from normally unbootable controllers, with the caveats that you must boot Plop itself first somehow and that the USB connection during boot is brutally slow.

I've added a module that doesn't load by itself at boot to the append line in syslinux.cfg. But once I'm booted up, if I rmmod the module, it says it's not loaded. That tells me I have failed somewhere. I have vfio-pci IDs listed and those seem to work, so I'm not sure where I'm going wrong.

     

Does someone have some example syntax where they are doing this successfully?

     

Also confusingly, i915 shows as not loaded either, but lspci -v shows "Kernel driver in use: i915" for my Intel iGPU, which seems contradictory.
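One thing worth noting, though this is my understanding rather than something I've verified against the docs: the syslinux append line only passes parameters to the kernel; it doesn't load modules by name, which would explain the module never appearing. Loading it from the go file instead should work (module name below is just a placeholder):

```
# /boot/config/go (sketch) -- runs at every boot after the kernel is up.
# The syslinux append line passes kernel parameters; it does not load
# modules, so load them explicitly here.
modprobe i915   # or whichever module refuses to auto-load
```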
