About Keexrean

Rank: Advanced Member · Birthday: 08/24/1992
  1. Do you realize the issue itself has already been diagnosed and solved? And that it may be tied to certain specific hardware, so your particular fix isn't a miracle solution for everyone else?
  2. @rachid596 I'd advise you to read a bit more closely, because I'm already on 6.8.1 with the plugin, and this precise topic is about how it doesn't work with that setup. You're welcome.
  3. Hi everyone, and thanks for tagging me on that, else I would have missed it. So it seems 6.9 will indeed solve the issue, but since I prefer to wait for the stable release (this server is kinda important, and I can't afford to run into beta issues), I might still go barbarian on it in the meantime with some rmmod if I fail to sit tight until 6.9 gets released.
  4. I get that pcie_acs_override=downstream and type1.allow_unsafe_interrupts=1 are kind of old-school methods now; they're just settings ported over from when I was running the same Unraid install in another box, an HP Z600 workstation, which definitely needed them to pass anything through. (I also used to shuffle my PCIe devices a lot, so using those instead of targeted stubbing was a comfort method; also, I never used 'vfio-pci.ids=', I always did 'pci-stub.ids='.) I'll admit with no shame whatsoever that I just took my drives, HBA and NICs out of the Z600, slapped them in the R720 and booted it with little to no care in early 2020. I might be part of this year's curse theme.
I called it graceful as in 'it went smoothly': power down the server, slap the GPU in, boot, a little XML edit of the VM to set multifunction and bring the sound device onto the same virtual slot, add the vBIOS rom, and the VM boots right away; drivers install, CUDA acceleration works, and PassMark is in the expected range. And since it has been running stable for over a week and through about 30 VM restarts, as long as it doesn't catch fire, I call that good enough.
As for the NICs! Saying they were unconfigured: I did use them at some point for Unraid's main networking, as can be seen in that thread. This 'onboard' stock NIC did show some unreliable behavior before, which I mostly attributed to heat: heavy usage plus a quite low airflow setting for my ears' sanity (running the R720's fans between 7 and 12% speed, managed through an ipmitool script so disgusting it would probably make you burp blood). And since no one seemed eager to even throw me a ball on that thread back then, I got fed up with the unreliability and decided to move (and upgrade) to a 2x10gbps Base-T card with active cooling for the server's main networking, while I already had an SFP+ card dedicated to back-to-back connections only.
Eth 4, 5, 6 and 7 still have their custom names in the network settings panel if I unstub them, but they now have just stock-nothing settings dialed in, aren't linked to any network/bridge or anything, and are all "down". And I'm starting to think that MAYBE part of the card's unreliability back when I was using it as the main NIC isn't all about heat, but lies deeper. It would indeed be interesting to see what insight the 6.9 release's features would give on the issue. But I feel like whenever it bothers me enough (which will probably happen before the 6.9 release comes out), I'll go try some sad-milking methods, like echo -n "0000:01:00.3" > /sys/bus/pci/drivers/igb/unbind, or just rmmod it.
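The manual unbind mentioned above can be sketched like this, assuming the igb driver and the 0000:01:00.3 address from the post (check your own with `lspci -nn`); the function names are mine, not anything Unraid ships:

```shell
#!/bin/bash
# Sketch: release one port of the onboard NIC from the igb driver,
# then hand it to vfio-pci. Run as root on the actual host.

unbind_from_driver() {
  local dev="$1" drv="$2"
  # Note the direction: the device address is written INTO the
  # driver's unbind file, not the other way around.
  if [ -e "/sys/bus/pci/drivers/${drv}/${dev}" ]; then
    echo -n "${dev}" > "/sys/bus/pci/drivers/${drv}/unbind"
  fi
}

bind_to_vfio() {
  local dev="$1"
  # driver_override pins this one device to vfio-pci for the next
  # probe, without stubbing its 3 identical sibling ports by ID.
  echo "vfio-pci" > "/sys/bus/pci/devices/${dev}/driver_override"
  echo -n "${dev}" > /sys/bus/pci/drivers_probe
}

# Example invocation (commented out; this touches real hardware):
# unbind_from_driver "0000:01:00.3" "igb"
# bind_to_vfio "0000:01:00.3"
```

The blunt alternative, `rmmod igb`, also works but releases every igb-bound port at once; that's only safe here because the main NICs are on different cards with different drivers.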
  5. Hi! Nice. Still on Unraid 6.8.1 right now, and I managed to pass through a Quadro card without stubbing it with pci-stub.ids= like I used to, and without the VFIO-PCI Config plugin; it was handled gracefully with just pcie_acs_override=downstream and type1.allow_unsafe_interrupts=1. Though I'm facing an issue with the onboard NIC and the VFIO-PCI Config plugin.
Dell PowerEdge R720 here, using 2 other (better) network cards to actually connect the host to the network and for back-to-back stuff; I would have liked to use all 4 'onboard' ports for some pfSense-VM and routing tests. So I went and used the VFIO-PCI Config plugin to stub all 4 ports (since they each appear as their own subdevice), but as you can see on that screenshot, Unraid for some reason keeps grabbing and using 2 of the 4 ports, for NO reason, since at boot no ethernet cables were even plugged into them, and they were all unconfigured, port-down interfaces. In network settings, the MAC address selection shows eth6 and eth7, the two "derping" ports 01:00.0 and 01:00.1, but for some reason only one of the two actually shows up as an available and configurable port (it shouldn't show up at all; I don't want them grabbed by Unraid). And check that sweet iDRAC, where I can see how the onboard management sees the derping; please note they are in reverse order between IOMMU and Unraid/iDRAC:
  • port one, used to be eth4: good, not grabbed
  • port two, used to be eth5: good, not grabbed
  • port three, corresponding to eth6: half-a$$ grabbed, but no difference seen in iDRAC
  • port four, corresponding to eth7: fully grabbed, and seen as functional
I just don't get how it can fail, but since these devices are reset-capable, it would be handy to know if there is a way to tell Unraid "Bad! Bad hypervisor! Now sit!" and forcefully unload the devices without causing a kernel panic.
If there is, it could be a life-saving option in the upcoming 6.9: being able to tell Unraid to just auto-unload some devices after booting, when they are known to hook themselves up for no reason. Please note, and I repeat, there are no cables plugged into these 4 ports, nothing is configured to hook to them, and iDRAC has its own dedicated port that isn't linked to the 'onboard' NIC (which is in fact on a mezzanine card). If you could light my lantern there: I have no explanation for why Unraid is acting stubborn with this NIC while handling GPU passthrough so blissfully on the other hand.
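For context, the "old-school" boot-flag stubbing mentioned above lives in the `append` line of syslinux.cfg on the Unraid flash drive. A sketch of what that looks like; the device ID below (8086:1521, an Intel i350) is purely illustrative, not necessarily this mezzanine card's actual ID, and the full module-parameter spelling of the interrupts flag is `vfio_iommu_type1.allow_unsafe_interrupts=1`:

```
label Unraid OS
  kernel /bzimage
  append pcie_acs_override=downstream vfio_iommu_type1.allow_unsafe_interrupts=1 vfio-pci.ids=8086:1521 initrd=/bzroot
```

Worth noting: ID-based stubbing (`vfio-pci.ids=` or `pci-stub.ids=`) grabs every device with that vendor:device ID, so on a 4-port NIC where all ports are identical it is all-or-nothing; that is exactly the gap per-port tools like the VFIO-PCI Config plugin are meant to fill.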
  6. Kinda necro-posting, but I would really enjoy this thing being a thing again, especially since the original docker repo vanished. I've seen the thread referenced above, but I would really prefer this as a docker or plugin rather than a direct modification to Unraid (my install is already plagued, and I'm trying to reverse it back to more streamlined solutions). If someone's wondering "but for what use": I want to stop burning USB drives, and I don't like having drives in my ESXi box at all. If I could boot my ESXi server from my Unraid storage, it would be sweet. The VMs of my ESXi box are already stored on the Unraid array anyway (a 10gbps back-to-back link between the two servers is dedicated to not storing stuff in the ESXi box).
  7. Hi, I know, not the latest release, but it's a very, very minor inconvenience (that could still be a hassle if the stars align for apocalypse with other events or server load), and since I don't feel like updating an otherwise stable (and task-running) server, I won't try on 6.8.3 for now. My parity check is on a monthly schedule, set for the last day. So far, never an issue. BUT: yesterday it ran the parity check (even though it wasn't the last day of the month), and it ran again today. I think it just "derped" and forgot that July has 31 days. While that's not a big issue for me, a parity check is a lot of array load and could cause trouble if the schedule derps in more critical setups; and if the scheduler derped on the parity check, I'm curious what else it could derp on. That's all from me, have a great day!
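If the built-in scheduler keeps derping, the classic cron workaround for "last day of the month" is to fire on days 28-31 and let the script bail unless tomorrow is the 1st. A minimal sketch, assuming GNU date (which Unraid's Slackware base ships); the script path in the comment is hypothetical, and I'm deliberately not guessing at Unraid's internal parity-start command:

```shell
#!/bin/bash
# Hypothetical cron entry (days 28-31, 3 AM):
#   0 3 28-31 * *  /boot/scripts/parity_last_day.sh
# Only proceed when tomorrow is the 1st, i.e. today really is
# the last day of the month (requires GNU date's -d option).
if [ "$(date -d tomorrow +%d)" = "01" ]; then
  msg="last day of month: start parity check here"
else
  msg="not the last day: skip"
fi
echo "${msg}"
```

This sidesteps having to know whether the month has 28, 29, 30 or 31 days: the date arithmetic answers that for you.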
  8. Eeeer, I was mostly going for the Simpsons reference, saying it's effective, yes, but with a humorous twist; and I decided to ride that boat and show the next level of "refined jankiness", as in the daily use of electronics and PC parts wrapped in tape, but with fancy and luxurious taste in the choice of tape, presented with that naked, Kapton-wrapped SSD in the laptop I use as a dedicated IPMI console on a different VLAN from the rest.
  9. Shoot! That would have been useful a month ago! (Also, earlier in this thread there was talk about SanDisk sometimes producing flash drives that fail to boot. Some people report no issues of that sort, and I do have some SanDisk flash drives that boot with no issue, but also one that doesn't.)
  10. Well, the one on the left is basically what has run my Unraid server since 2017... I always broke the plastic casings of USB keys (and this particular key also had a mobile workstation dropped on it at some point before it became my server's boot device). Kapton sounded to me like the least-bad tape that would still be better than bare PCB (at least I was sure the adhesive wouldn't deteriorate). But you were warned! I said it was wrapped in Kapton tape from the start, in the OG post! And you think a Kapton-wrapped USB is wrong? What about that!?
  11. Upgrades people, upgrades
  12. Well, from what we can tell, that pack of 3 seems to be a triplet-twin situation. Units sold individually are probably still unique. Damn, the 2€ saving wasn't worth it.
  13. And again... and again... procyon-diagnostics-20200714-0018.zip procyon-diagnostics-20200713-2010.zip
  14. This time it was the cheaper option: by unit it was 6.99, and the 3-pack was 18.99, so basically a 2€ saving. Edit: the etching on their narrow edge is fully identical on all 3.