Posts posted by Keexrean

  1. On 11/14/2021 at 2:15 AM, mojotaker said:

    PLEASSEEEEEEE How did you fix it. I have this problem with fresh installs also

    [screenshot attached]

     

     

    I didn't. I just slapped in a shitty 20-buck 4x1Gbps PCIe NIC, because the Intel mezzanine card for Dell servers is dogshit, and the Broadcom one for the R720 (Broadcom being the brand of most of my PCIe NICs, which work flawlessly) isn't cheap, an expense I don't want to make.

    • Like 1
  2. Hi everyone!

     

    Some of you, like me, don't actually have a room or a closet dedicated to your servers.

    Some of you, like me, don't have a basement or a garage to tuck them away from the living space.

     

    Some of you, myself included, even sleep in the same room as your open rack. Yup.

     

     

    And even if you don't, you may have environmental conditions or noise requirements that are very different from what the handful of cooling profiles available in iDRAC were made and tuned for.

    That's exactly why I made myself an assortment of scripts to manage my servers' cooling automatically, but from parameters I can fine-tune.

     

    In this GitHub repository you will find the script as I published it: it is made to be standalone* and can be run on Unraid as-is as a cron job, for example through the User Scripts plugin.

    *unlike the ones I actually use, which are very setup-specific (detailed in the README.md).

     

    The fancontrol.sh script basically gives you everything you need; I tried to detail it step by step in the comments, and it should give you every indication needed to use it.

    I would still advise you to read the README.md carefully, since it also contains warnings about running the script on the same machine whose cooling it controls.

    Also mind the requirements and annotations. If you're on iDRAC Express, for example, an upgrade might be doable on the cheap.

     

     

    If you want to go a bit more in depth with what you can do with iDRAC on these generations of servers and with IPMI tools, you can find here a little explanatory summary of the iDRAC commands and tricks I know about, since these decade-old resources are slowly disappearing, scattered across the web, or sometimes only findable through web.archive.org's Wayback Machine.
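    To give an idea of the kind of thing the script does under the hood, here is a minimal sketch of manual fan control through ipmitool on a 12th-gen PowerEdge (the raw command bytes are the community-documented ones for iDRAC7; the IP, credentials and thresholds are placeholders, and the real fancontrol.sh does quite a bit more than this):

      #!/bin/bash
      # Minimal sketch, NOT the published fancontrol.sh, just the core idea.
      IDRAC_IP="192.168.0.120"   # placeholder iDRAC address
      IDRAC_USER="root"          # placeholder credentials
      IDRAC_PASS="calvin"
      IPMI="ipmitool -I lanplus -H $IDRAC_IP -U $IDRAC_USER -P $IDRAC_PASS"

      # Highest temperature currently reported by the BMC, in degrees C
      TEMP=$($IPMI sdr type temperature | grep -Eo '[0-9]+ degrees' | awk '{print $1}' | sort -n | tail -1)

      if [ "$TEMP" -ge 45 ]; then
          # Too hot: hand control back to iDRAC's automatic profile
          $IPMI raw 0x30 0x30 0x01 0x01
      else
          # Take manual control and pin all fans to ~12% duty cycle (0x0c)
          $IPMI raw 0x30 0x30 0x01 0x00
          $IPMI raw 0x30 0x30 0x02 0xff 0x0c
      fi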

     

     

     

    I'm in no way an expert, and I'm completely open to being schooled constructively. That's what forks and pull requests are for on GitHub.

     

     

    EDIT:

    The script has been improved a lot with revisions 3 and 4. A guide for Unraid beginners has been added.

  3. Hi people, here's the thing,

     

    I have a router that doesn't support VLANs (long story short: ISP router, proprietary GPON, can't be arsed to set up an edge router).

     

    As such, my VLANs are different networks altogether and really only exist on my L2 switch to segment the ports that should be on this or that network.

     

    Consider eth0 and eth1.

    Neither is configured in Unraid to be VLAN-aware. Again, the VLANs are only set up as untagged ports on the switch anyway.

     

     

    eth0 is on VLAN1, the ISP router network XX.XX.7.XX/24. From any other computer on that network, I can access every running service and the web GUI, working as intended and expected.

     

     

     

    eth1 used to be a direct connection, through a DAC cable, between my server and my workstation.

     

    The server is set to a static IP on XX.XX.42.XX/24, and so is my workstation.

     

    With a simple DAC cable linking the two back to back, yay, 10Gbps transfers work just fine, I can access Dockers and such, all fine.

     

    If I plug both of them into the switch, on ports assigned to VLAN42 (in which they are "alone"; there is no router on that network or VLAN), they can ping each other, the workstation can SSH into the server, and I can even smbclient into the workstation's shares from the server!

    But no web GUI, no talking to the Dockers that are on host or bridge, NOTHING.

     

    What gives?
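
    (In case it helps anyone narrow it down, here's roughly what I've already been poking at from the Unraid console; addressing as described above, nothing exotic:)

      # Which interface would a reply to the workstation actually leave from?
      ip route get XX.XX.42.XX        # the workstation's static address
      # What does eth1 carry, and is it counting any packets or errors?
      ip addr show eth1
      ip -s link show eth1
      # Is the web GUI listening on all addresses, or only on eth0's?
      ss -tlnp | grep -E ':(80|443) '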

    Hello hi, about two weeks into using Unraid 6.9, trying two of my three known-working Solarflare SFN5122N cards, and I already had to reboot TWICE because Unraid just casually and randomly forgets that a network card is supposed to actually fulfill a function.

     

    Basically, no other info than "ethX link is down, please check cables".

    Oh, I've got cables: direct-attach copper or fiber, which color, length or thickness do you fancy? Even using the ones that never dropped a packet in back-to-back connections on my Proxmox and workstation boxes, Unraid still has no clue a cable is plugged in at all. While Windows and Proxmox visibly have no issue seeing and using these NICs, Unraid apparently hasn't got its goggles on, plus a bad case of wrist stumps.

    Point is.

    I can't use the onboard NIC and have a NetXtreme II BCM57810 dual 10Gbps for Ethernet cabling because Unraid is being a dumbass with the onboard NIC; it only works when I pass the thing through to a VM. Great, I don't need that!

    And now I apparently can't use my SFP+ Solarflare cards either and will have to get another SKU/brand because Unraid can't deal with them either?

     

    And in 10 months basically no one has cared. That's awesome.
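
    For what it's worth, here's roughly what I check from the console when a card goes missing like this (sfc is the in-kernel Solarflare driver; the interface name is just whichever ethX the card enumerated as):

      lspci -nnk | grep -A3 -i solarflare   # is the card even seen, and which driver claimed it?
      dmesg | grep -i sfc                   # did the sfc driver complain at probe time?
      ethtool eth4                          # does the kernel report "Link detected: yes"?
      ip -s link show eth4                  # any packets or errors counted at all?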

  5. @grphx @CrimsonBrew @Widget Nope.
    Unless you try some weird techniques I have seen flung around, UEFI seems to be a no-go (at least on an R720) because of the onboard video chip, which you can disable in favor of an add-in video card, but then you lose iDRAC's remote screen and access to the BIOS at boot time.

    Though legacy vs. UEFI boot isn't that big of a deal for virtualization purposes, since the boot method of the host has minimal impact on the VMs' boot method, and legacy boot sometimes allows some peculiar hardware to work in passthrough when it derps in UEFI.

    Honestly, I don't think we're missing much.
    UEFI boot on Unraid, I think, only gets useful in some cases I've heard about where people's servers couldn't boot properly in legacy mode after some update, and switching to UEFI did the trick (on way more recent hardware than our beaters here, though).

    Okay, update. Since that fiasco, I basically dropped $$$ on a second add-in NIC, a dual 10Gbps RJ45 card (to be eth0), on top of the original 4x1Gbps NIC and the already added dual 10Gbps SFP+ card.

     

    And so far it was working great!
    Except today.

    After 72 days of uptime, it crashed... but not the OG NIC! The SFP+ NIC is now the buggy one! Unplugging and plugging back in didn't do a thing.

     

    A reboot fixed it... but I still find it surprisingly unpredictable and unsolvable behavior; it wouldn't be much of an issue for a desktop distro, but it's quite worrying on a server-oriented distro, AND on server hardware, mind you.

    (I wouldn't make much of this kind of issue on a desktop; it's really because it's a server that I take it seriously.)

    procyon-diagnostics-20201222-1913.zip

    I get that pcie_acs_override=downstream and type1.allow_unsafe_interrupts=1 are kind of old-school methods now; they're just settings that were carried over from when I was running the same Unraid install in another box, an HP Z600 workstation, which definitely needed them to be able to pass anything through.

    (I also used to shuffle my PCIe devices a lot, so using that instead of targeted stubbing was just a comfort method; also, I never used 'vfio-pci.ids=', I always did 'pci-stub.ids='.)

     

    I'll admit with no shame whatsoever that I just took my drives, HBA and NICs out of the Z600, slapped them in the R720 and booted it with little to no care in early 2020. I might be part of this year's curse theme.

     

    I called that graceful as in 'it went smoothly'. Power down the server, slap the GPU in, boot, a little XML edit of the VM to set multifunction and bring the sound device onto the same virtual slot, vBIOS ROM, the VM boots right away, drivers install, CUDA acceleration works and PassMark is in the expected range. And since it has been running stable for over a week, and through about 30 VM restarts, as long as it doesn't catch fire, I call that good enough.

     

     

    As for the NICs! While I said they were unconfigured, I did use them at some point for Unraid's main networking, as can be seen in

     

    This 'onboard' stock NIC did show some unreliable behavior before, which I attributed most likely to heat: heavy usage plus quite a low airflow setting for my ears' sanity (running the R720's fans between 7 and 12% speed, managed through an ipmitool script so disgusting it would probably make you burp blood).

    And since no one seemed eager to even throw me a bone on that thread back then, I got fed up with the unreliability and decided to move (and upgrade) to a 2x10Gbps Base-T card with active cooling for the server's main networking, while I already had an SFP+ card dedicated only to back-to-back connections.

     

     

    eth4, 5, 6 and 7 still have their custom names in the network settings panel if I unstub them, but they only have stock, empty settings dialed in, aren't linked to any network/bridge or anything, and are all "down".

    And I'm starting to think that MAYBE part of the card's unreliability back when I was using it as the main NIC isn't all about heat, but lies deeper. It would indeed be interesting to see what insight the features of the 6.9 release would give into the issue.

     

    But I feel like whenever it bothers me enough (which will probably happen before the 6.9 release comes out), I'll go try some sad-milking methods, like echo -n "0000:01:00.3" > /sys/bus/pci/drivers/igb/unbind, or just rmmod it.
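
    Something along these lines, probably from the go file or a user script; I haven't battle-tested it yet, so consider it a sketch (the PCI address is the same 0000:01:00.3 port mentioned above):

      modprobe vfio-pci                                              # make sure vfio-pci is available
      echo -n "0000:01:00.3" > /sys/bus/pci/drivers/igb/unbind       # detach the port from igb
      echo vfio-pci > /sys/bus/pci/devices/0000:01:00.3/driver_override
      echo -n "0000:01:00.3" > /sys/bus/pci/drivers/vfio-pci/bind    # hand it to vfio-pci for passthrough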

    • Like 1
    Hi! Nice. Still on Unraid 6.8.1 right now, and I managed to pass through a Quadro card without stubbing with pci-stub.ids= like I used to, and without using the VFIO-PCI Config plugin; it handled it gracefully with just pcie_acs_override=downstream and type1.allow_unsafe_interrupts=1.
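
    For reference, the boot entry in my syslinux.cfg looks roughly like this (reconstructed from memory, so take the exact layout with a grain of salt; those two parameters are the only additions to the stock append line):

      label Unraid OS
        menu default
        kernel /bzimage
        append pcie_acs_override=downstream type1.allow_unsafe_interrupts=1 initrd=/bzroot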

     

    Though I'm facing an issue with the onboard NIC and the VFIO-PCI Config plugin.


    Dell PowerEdge R720 here, using two other (better) network cards to actually connect the host to the network and for back-to-back stuff; I would have liked to use all four 'onboard' ports for some pfSense VM and routing tests.

     

    So I went on and used the VFIO-PCI Config plugin to stub all four ports (since they each appear as their own subdevice): [screenshot]

    But as you can see on that screenshot, Unraid for some reason keeps grabbing and using two of the four ports, for NO reason, since at the moment, and at boot, no Ethernet cables were even plugged into them, and they were all unconfigured, link-down interfaces.

     

     

    In network settings, showing up in the MAC address selection are eth6 and eth7, the two "derping" ports 01:00.0 and 01:00.1, but for some reason only one of the two actually shows up as an available and configurable port (which it shouldn't at all; I don't want them grabbed by Unraid):

    [screenshot]

     

     

    And with that sweet iDRAC I can see how the onboard management sees the derping; please note they are in reverse order between IOMMU and Unraid/iDRAC:
    port one, used to be eth4, good, not grabbed:

    [screenshot]

    port two, used to be eth5, good, not grabbed:

    [screenshot]

    port three, corresponding to eth6, half-a$$ grabbed, but no difference seen in iDRAC:

    [screenshot]

    port four, corresponding to eth7, fully grabbed, and seen as functional:

    [screenshot]

     

     

    I just don't get how it can fail, but since these devices are reset-capable, it would be handy to know if there is a way to tell Unraid "Bad! Bad hypervisor! Now sit!" and forcefully unbind the devices without causing a kernel panic.

    If there is, that could be a life-saving option in the upcoming 6.9: being able to tell Unraid to just auto-unbind some devices after booting, when they are known to hook themselves up for no reason.

     

    Please note, and I repeat: there are no cables plugged into these four ports, nothing is configured to hook to them, and iDRAC has its own dedicated port that isn't linked to the 'onboard' NIC (which in fact is on a mezzanine card).

     

    If you could light my lantern here: I have no explanation for why Unraid is acting stubborn with this NIC while handling GPU passthrough so blissfully on the other hand.

    • Haha 1
    Kinda necro-posting, but I would really enjoy this thing being a thing again, especially since the original Docker repo vanished. I've seen the thread referenced above, but I would really prefer it to be a Docker container or a plugin rather than a direct modification to Unraid (my install is already plagued, and I'm trying to wind it back to more streamlined solutions).

    If someone's wondering "but what for?": I want to stop burning through USB drives, and I don't like having drives at all in my ESXi box. If I could boot my ESXi server from my Unraid storage, it would be sweet. The VMs of my ESXi box are already stored on the Unraid array anyway (a 10Gbps back-to-back link between the two servers is dedicated to not storing stuff in the ESXi box).

    Eeeer, I was mostly seeing the Simpsons reference, as in yes it's effective but with a humorous twist, and I decided to ride that boat and show the next level of "refined jankiness": the daily use of electronic and PC parts wrapped in tape, but with fanciness and luxurious taste in the choice of tape, presented with that naked, kapton-wrapped SSD in the laptop I use as a dedicated IPMI console on a different VLAN from the rest.

     

     

    Well, the one on the left is basically what has run my Unraid server since 2017... I always broke the plastic casings of USB keys (and this particular key also had a mobile workstation dropped on it at some point before it became my server's boot device).

    Kapton tape sounded to me like the least-worst tape that would still be better than a bare PCB (at least I was sure the adhesive wouldn't deteriorate).

     

    But you were warned! I said it was wrapped in kapton tape from the start, in the OG post!

     

    And you think kapton-wrapped USB is wrong? What about that!?
    [picture]
     

    Well, from what we can tell, that pack of 3 seems to be a triplet-twins situation.
    Probably the ones sold by unit are still unique. Damn, the 2€ saving wasn't worth it.

    That time it was the cheaper option: by unit it was 6.99, and the 3-pack was 18.99. Basically a 2€ saving.

     

    Edit: the etching on their narrow edge is fully identical on all 3.
