Keexrean

Everything posted by Keexrean

  1. I... actually have the same controller, same behavior in the UI, same result in the CLI, same brand of drives, just IronWolfs instead of Constellations, and 4TB instead of 500GB. And obviously the same question. I know it's a necro-post, but I would basically make a very similar post otherwise, and this one still needs an answer. Someone?
  2. I didn't. I just slapped a shitty 20-buck 4x1Gbps NIC into a PCIe slot, because the Intel mezzanine card for Dell servers is dogshit, and the Broadcom one for the R720 (Broadcom being the brand of most of my PCIe NICs, which work flawlessly) isn't cheap and is an expense I don't want to make.
  3. Thanks! It's indeed reassuring; you had similar concerns to mine, or at least voiced them well. If it indeed won't become impossible to manage licenses without the UPC, that could be worth mentioning in the report, since its current wording makes it sound like the exact opposite. That's that "solved", but it's still only part of my grief. Or at least it would really be solved if the thread didn't end in a bunch of unanswered questions, and if the UPC as a whole were just a module/plugin we could opt out of or uninstall, have disappear from the dashboard AND be unloaded from the OS. I wouldn't care about the UPC if it were like the FTP server or Mover, where you can just flick it off and be done with it. I wouldn't care about it if it were a pre-installed plugin you can just punt out of your system. I don't mind features I don't use, so long as you don't try to shove them in my face. A legitimate security pop-up on Pr*xm*x concerning community repos? I'm fine with that, because it's a legitimate security concern. A pop-up/recurring banner/alert to install the My Servers plugin or that kind of shenanigan? I'm through. An ugly space-consuming wart on the general header also counts as "shoving it in my face", especially when it scatters the server name, description, uptime and license around, which used to be four pieces of information in a clear list and are now just a design mess that won't let your eye off that big Sign In text and its orange carbuncle of an Unraid logo. I will be updating to 6.10, but only with a 6.9.2 backup and a PVE boot drive ready to go as an option, if I'm not feeling like actually going into the GUI's CSS file to nuke the UPC out of my view.
  4. From the report: "Starting with this release, it will be necessary for a new user to either sign-in with existing forum credentials or sign-up, creating a new account via the UPC in order to download a Trial key. All key purchases and upgrades are also handled exclusively via the UPC." From the updated wiki: sounds like indeed, no, paid keys won't be handled externally. Or maybe Limetech just forgot to mention it's still a possibility in BOTH the report AND the wiki? So from what I can read now, as of 6.10, if you want to upgrade your key or spin up a new server, the key management is done online with an Unraid account, and the key is indefinitely linked to that Unraid account. That's a no for me, and that does categorize as "phoning home for paid licenses" to me, and it doesn't sound optional at all if I believe the official report, and I quote: "All key purchases and upgrades are also handled exclusively via the UPC." So maybe it has never phoned home for a paid license before, but according to what I'm reading, it now will, and that's part of my grief. I PAID for it. And it's gonna nag me about something I haven't signed up for? I don't mind the security alert on Pr*xm*x about using community repos and such, because it's free software and a justifiable alert, tbh. But you're telling me the damn thing I paid for will give me pop-ups to install stuff I am thoroughly against? Does Unraid have an identity crisis or what, to pull a preloaded McAfee trial / Avast Free antivirus type of annoyance, basically bullying you into stuff you never agreed to on an OS you paid for? If I wanted that type of crapware "flair", I would run Windows Server.
  5. Wait... if I read that correctly, it would mean that starting from 6.10, an Unraid server would not only be "phoning home" to check key validity, but also stay connected to and/or associated with a forum account? At least if I don't want to worry about my certificate renewal? And I have to link my key to a forum account? "All key purchases and upgrades are also handled exclusively via the UPC." Ah hell no. When I bought into Unraid, it was because it was a one-time thing, with no strings attached. But you're telling me now that hardware and data I own, under my roof, will basically join an online cloud just to kindly ask permission to run properly? I'm out. Just out. I'll see how it goes, but I'll be preparing to move over to another OS in case I end up disliking it as much as I expect to. There is a reason I don't use Ubiquiti gear, or any IoT devices that rely on cloud services instead of local ones. There is a reason I use local accounts on Windows and not M$ ones. The whole point of self-hosting is to not depend on external entities for services or access to your own stuff. Unraid might just have lost its whole point there for me, and the user-friendliness won't be enough anymore to justify its place in my rack compared to Pr*xm*x, which I'm very fine with too and already run several nodes of.
  6. Hi everyone! Some of you, like I do, don't actually have a room or a closet dedicated to their servers. Some of you, like I do, don't have a basement or a garage where they could be tucked away from the living space. Some of you, myself especially, even sleep in the same room as their open rack. Yup. And even if you don't, you may have environmental conditions or noise requirements that are way different from what the handful of cooling profiles available in iDRAC are initially made and tuned for. That's exactly why I made myself an assortment of scripts to have my servers' cooling managed automatically, but from parameters I can fine-tune. You will find in this GitHub repository the script as I published it, made to be standalone* so it can be run on Unraid as-is as a cron job, for example through the User Scripts plugin. *Unlike the ones I actually use, which are very setup-specific (detailed in the README.md). The fancontrol.sh script basically gives you everything you need; I tried to detail it step by step in the commented notes, and it should provide every indication needed to use it. I would advise you, though, to read the README.md carefully, since it also contains warnings about running the script on the same machine whose cooling it is controlling. Mind the requirements and annotations too; if you're on iDRAC Express, for example, an upgrade might be doable on the cheap. If you want to go a bit more in depth with what you can do with iDRAC on these generations of servers and IPMI tools, you can find here a little explanatory summary of stuff and commands I know about iDRAC, since these decade-old resources are slowly disappearing and scattering across the web, some only findable through web.archive.org's Wayback Machine. I'm in no way an expert, and completely open to being schooled constructively. That's what forks and pull requests are for on GitHub. EDIT: The script has been improved a lot with revisions 3 and 4. A guide for Unraid beginners has been added.
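     For the curious, here is a much-simplified sketch of the idea, NOT the actual fancontrol.sh from the repo: it assumes ipmitool is installed and an iDRAC7-era Dell (R720-class) that accepts the usual raw fan-control commands, and the IP, credentials, temperature thresholds and fan percentages are all placeholders you would tune yourself.

     ```bash
     #!/bin/bash
     # Toy sketch of iDRAC fan control over IPMI. Placeholders everywhere;
     # read the real repo's README before pointing anything like this at your server.
     IDRAC_IP="192.168.0.120"   # placeholder
     IDRAC_USER="root"          # placeholder
     IDRAC_PASS="calvin"        # placeholder
     IPMI="ipmitool -I lanplus -H $IDRAC_IP -U $IDRAC_USER -P $IDRAC_PASS"

     # Hottest temperature any sensor reports, in degrees C
     TEMP=$($IPMI sdr type temperature | grep -o '[0-9]\+ degrees C' | awk '{print $1}' | sort -n | tail -1)

     if [ "$TEMP" -ge 70 ]; then
         # Too hot: hand control back to iDRAC's automatic profile
         $IPMI raw 0x30 0x30 0x01 0x01
     elif [ "$TEMP" -ge 55 ]; then
         # Warm: manual control, roughly 20% fan speed (0x14)
         $IPMI raw 0x30 0x30 0x01 0x00
         $IPMI raw 0x30 0x30 0x02 0xff 0x14
     else
         # Cool: manual control, roughly 8% fan speed (0x08)
         $IPMI raw 0x30 0x30 0x01 0x00
         $IPMI raw 0x30 0x30 0x02 0xff 0x08
     fi
     ```

     Dropped into a cron job (or the User Scripts plugin) every minute or so, that is essentially the loop the real script wraps with a lot more checks and safeties.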
  7. Hi people, here's the thing: I have a router that doesn't support VLANs (long story short: ISP router, proprietary GPON, can't be arsed to set up an edge router). As such, my VLANs are really separate networks altogether and only exist on my L2 switch to segment which ports belong to this or that network. Consider eth0 and eth1. Neither is configured in Unraid to be VLAN-aware; again, the VLANs are only set up as untagged ports on the switch anyway. eth0 is on VLAN1, the ISP router's network, XX.XX.7.XX/24. From that network, from any other computer on it, I can access every running service and the web GUI, working as intended and expected. eth1 used to be a direct connection, through a DAC cable, between my server and my workstation. The server is set to a static IP on XX.XX.42.XX/24, and so is my workstation. With a simple DAC cable linking the two back to back, yay, 10Gbps transfers work just fine, I can access dockers and stuff, all fine. If I plug both of them into the switch, on ports assigned to VLAN42 (in which they are "alone"; there is no router on that network or VLAN), they can ping each other, the workstation can SSH into the server, and I can even smbclient into the workstation's shares from the server! But no web GUI, no talking to the dockers that are on host or bridge, NOTHING. What gives?
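     For anyone wanting to poke at the same symptom, these are generic iproute2/tcpdump checks (nothing Unraid-specific; interface names and IPs are placeholders) to see what the web GUI is actually bound to and whether traffic from the other subnet even reaches the box:

     ```bash
     # What addresses/interfaces is the web server actually listening on?
     ss -tlnp | grep -E ':80 |:443 '

     # Does eth1 (or its bridge) actually carry the XX.XX.42.XX address, and is the link up?
     ip -br addr show
     ip -br link show

     # Which interface would replies to the workstation go out of?
     ip route get 192.168.42.10    # placeholder workstation IP

     # Do the HTTP SYNs from the workstation even arrive on that interface?
     tcpdump -ni eth1 'tcp port 80 or tcp port 443'
     ```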
  8. I don't know if you still have your issue at the moment, but I encountered a similar one on 6.8.x with my R720's onboard NIC: it would just drop the eth link, and no amount of reseating cables in the RJ45 ports would fix it. I would have to restart the server several times to get what I would call a "good boot", one where it doesn't lose connection within the first 30 minutes to an hour of uptime. I ended up splurging on a PCIe Base-T NIC, which has held flawlessly since. Then since 6.8.3, and worse since 6.9, it's my OTHER PCIe NIC, an SFP+ one, that started having this issue. And you know what? NO one, in 10 months, cared enough to take a look at it. A deal-breaking issue for a server, randomly losing connection, and no one gives a foo about it. So my call would be for you to take the loss of a PCIe slot and half a hundred quid+ for a PCIe NIC, or to switch to another server distro than Unraid 6.9, because at this rate it ain't gonna be fixed before Unraid 42.0.
  9. Hello hi, about 2 weeks into using Unraid 6.9, trying 2 of my 3 known-working SolarFlare SFN5122N cards, and I already had to reboot TWICE because Unraid just casually and randomly forgets a network card is supposed to actually fulfill a function. Basically no other info than "ethx link is down, please check cables". Oh, I've got cables. Direct-attach copper or fiber, which color, length or thickness do you fancy? And even using the ones that never dropped a packet in back-to-back connections on my Proxmox and workstation boxes, Unraid still has no clue a cable is plugged in at all, because while Windows and Proxmox visibly have no issue seeing and using these NICs, apparently Unraid hasn't got its goggles on and has a bad case of wrist stumps. Point is: I can't use the onboard NIC, and I have a NetXtreme II BCM57810 dual 10Gbps for ethernet cabling instead, because Unraid's being a dumbass with the onboard one, which only works when I pass it through to a VM, great, I don't need that! And now I apparently can't use my SFP+ Solarflare card either and will have to get another SKU/brand because Unraid can't deal with that one too? And in 10 months basically no one cares. That's awesome.
  10. @grphx @CrimsonBrew @Widget Nope. Unless you try some weird techniques I have seen flung around, UEFI seems to be a no-go (at least on an R720) because of the onboard video chip, which you can disable in favor of an add-in video card, but then you lose iDRAC's remote screen and access to the BIOS at boot time. The choice between legacy and UEFI boot isn't that big a deal for virtualization purposes though, since the host's boot method has minimal impact on a VM's boot method, and legacy boot sometimes allows some peculiar hardware to work in passthrough when it derps in UEFI. Honestly, I don't think we're missing much. UEFI boot on Unraid only seems useful in some cases I've heard about, where people's servers could no longer boot properly in legacy after some update but switching to UEFI did the trick (on way more recent hardware than our beaters here, though).
  11. Okay, update. Since that fiasco, I dropped $$$ on a second add-in NIC, a dual 10Gbps RJ45 card (to be eth0), on top of the original quad 1Gbps NIC and the already-added dual 10Gbps SFP+ card. And so far it was working great! Except today. After 72 days of uptime, it crashed... but not the OG NIC! The SFP+ NIC is now the buggy one! Unplugging and plugging back in didn't do a thing; a reboot fixed it. But I still find it surprisingly unpredictable and unsolvable behavior, which wouldn't be much of an issue for a desktop distro but is quite worrying on a server-oriented distro, AND on server hardware, mind you. (I wouldn't make much of this kind of issue on a desktop; it's precisely because it's a server that I take it seriously.) procyon-diagnostics-20201222-1913.zip
  12. Lucky you! I don't even have the Configure button! (Dell PowerEdge R720)
  13. Do you realize the issue by itself is already troubleshot and solved? And that it may be linked to certain hardware, and that your precise situation isn't a miracle solution for everyone else?
  14. @rachid596 I'd advise you to read a bit more closely, because I'm already on 6.8.1 and with the plugin, and this precise topic is about how it doesn't work with that setup. You're welcome.
  15. Hi everyone, and thanks for tagging me on that, else I would have missed it. So it seems 6.9 will indeed solve the issue, but as I prefer to wait for the stable release (because this server is kinda important, and I can't afford to run into beta issues), I might still go barbarian on it in the meantime with some rmmod if I fail to sit tight until 6.9 gets released.
  16. I get that pcie_acs_override=downstream and type1.allow_unsafe_interrupts=1 are kind of old-school methods now; they're just settings that were carried over from when I was running the same Unraid install in another box, an HP Z600 workstation, which definitely needed them to be able to pass anything through. (I also used to shuffle my PCIe devices a lot, so using that instead of targeted stubbing was just some kind of comfort method; also, I never used 'vfio-pci.ids=', I always did 'pci-stub.ids='.) I'll admit with no shame whatsoever that I just took my drives, HBA and NICs out of the Z600, slapped them into the R720 and booted it with little to no care in early 2020. I might be part of this year's curse theme. I called that graceful as in 'it went smoothly': power down the server, slap the GPU in, boot, a little XML editing of the VM to set multifunction and bring the sound device onto the same virtual slot, vbios rom, VM boots right away, drivers install, CUDA acceleration works and Passmark is in range. And since it has been running stable for over a week, and through about 30 VM restarts, as long as it doesn't catch fire I call that good enough. As for the NICs! When I say they were unconfigured: I did use them at some point for Unraid's main networking. This 'onboard' stock NIC showed some unreliable behavior before, which I attributed most likely to heat, heavy usage plus a quite low airflow setting for my ears' sanity (running the R720's fans between 7 and 12% speed, managed through an ipmitool script so disgusting it would probably make you burp blood). And since no one seemed eager to even throw me a ball on that thread back then, I got fed up with the unreliability and decided to move (and upgrade) to a 2x10Gbps Base-T card with active cooling for the server's main networking, while I already had an SFP+ card dedicated to back-to-back connections only. Eth4, 5, 6 and 7 still have their custom names in the network settings panel if I unstub them, but since then they just have stock-nothing settings dialed in, aren't linked to any network/bridge or anything, and are all "down". And I'm starting to think that MAYBE part of the card's unreliability back when I was using it as the main NIC isn't all about heat, but lies deeper. It would indeed be interesting to see what insight the features of the 6.9 release would give on the issue. But I feel like whenever it bothers me enough (which will probably happen before the 6.9 release comes out), I'll go try some last-resort methods, like echo -n 0000:01:00.3 > /sys/bus/pci/drivers/igb/unbind, or just rmmod the driver.
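      For reference, this is roughly what that last-resort unbind would look like in practice, a sketch only: the PCI address is a placeholder from my box, and yanking a driver off a live NIC can obviously take your networking down with it.

      ```bash
      # Detach one port (placeholder PCI address) from the igb driver
      echo -n "0000:01:00.3" > /sys/bus/pci/drivers/igb/unbind

      # Optionally hand it to vfio-pci so a VM can grab it (driver_override, kernel 3.16+)
      echo vfio-pci > /sys/bus/pci/devices/0000:01:00.3/driver_override
      echo -n "0000:01:00.3" > /sys/bus/pci/drivers_probe

      # Nuclear option instead: unload the driver entirely,
      # which takes down EVERY port that uses igb at once.
      # rmmod igb
      ```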
  17. Hi! Nice. Still on Unraid 6.8.1 right now, and I managed to pass a Quadro card through without stubbing with pci-stub.ids= like I used to, and without the VFIO-PCI Config plugin; it was handled gracefully with just pcie_acs_override=downstream and type1.allow_unsafe_interrupts=1. I am, however, facing an issue with the onboard NIC and the VFIO-PCI Config plugin. Dell PowerEdge R720 here, using 2 other (better) network cards to actually connect the host to the network and for back-to-back stuff, so I would have liked to use all 4 'onboard' ports for some pfSense-VM and routing tests. So I went and used the VFIO-PCI Config plugin to stub all 4 ports (since they each appear as their own subdevice), but as you can see in that screenshot, Unraid for some reason keeps grabbing and using 2 of the 4 ports, for NO reason, since at the moment, and at boot, no ethernet cables were even plugged into them, and they were all unconfigured, port-down interfaces. In network settings, eth6 and eth7 show up in MAC address selection, the two "derping" ports 01:00.0 and 01:00.1, but for some reason only one of the two actually shows up as an available and configurable port at all (which it shouldn't, since I don't want them grabbed by Unraid in the first place). And check that sweet iDRAC, where I can see how the onboard management sees the derping; please note they are in reverse order between IOMMU and Unraid/iDRAC: port one, used to be eth4, good, not grabbed; port two, used to be eth5, good, not grabbed; port three, corresponding to eth6, half-a$$ grabbed, but no difference seen in iDRAC; port four, corresponding to eth7, fully grabbed, and seen as functional. I just don't get how it can fail, but since these devices are reset-capable, it would be handy to know if there is a way to tell Unraid "Bad! Bad hypervisor! Now sit!" and forcefully unload the devices without causing a kernel panic. If there is, that could be a life-saving option in the upcoming 6.9: being able to tell Unraid to just auto-unload some devices after booting, when they are known to hook themselves up for no reason. Please note, and I repeat, there are no cables plugged into these 4 ports, nothing configured to hook to them, and iDRAC has its own dedicated port that isn't linked to the 'onboard' NIC (which is in fact on a mezzanine card). If you could light my lantern there: I have no explanation for why Unraid is acting stubborn with this NIC while handling GPU passthrough so blissfully on the other hand.
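      If anyone wants to compare with their own box, this is how I check which driver actually ended up holding each of the four functions after boot (the PCI addresses are from my R720, yours may differ):

      ```bash
      # "Kernel driver in use:" tells you whether vfio-pci really got the port,
      # or whether igb / Unraid grabbed it back.
      for fn in 0 1 2 3; do
          lspci -nnk -s 01:00.$fn
      done

      # Cross-check which PCI devices Unraid actually turned into ethX interfaces
      ip -br link show
      ls -l /sys/class/net/*/device 2>/dev/null
      ```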
  18. Kinda necro-posting, but I would really enjoy this thing being a thing again, especially since the original docker repo vanished. I've seen the thread referenced above, but I would really prefer this being a docker or plugin rather than a direct modification to Unraid (my install is already plagued, and I'm trying to reverse it back toward more streamlined solutions). If someone's wondering "but for what use": I want to stop burning through USB drives, and I don't like having drives in my ESXi box at all. If I could boot my ESXi server from my Unraid server's storage, it would be sweet. The VMs of my ESXi box are already stored on the Unraid array anyway (there's a 10Gbps back-to-back link between the two servers dedicated to not storing stuff in the ESXi box).
  19. Hi, I know, not the latest release, but it's a very, very minor inconvenience (that could still be a hassle if the stars align for apocalypse with other events or server load), and since I'm not feeling like updating an otherwise stable (and task-running) server, I won't try it now on 6.8.3. My parity check is on a monthly schedule, set for the last day of the month. So far, never an issue. BUT, yesterday it did the parity check (even though it's not the last day of the month), and did it again today. I think it just "derped" and forgot that July has 31 days. While that's not a big issue for me, a parity check is a lot of array load and could cause inconvenience if the schedule derps in more critical setups, and if the scheduler derped with the parity check, I'm curious what else it could derp with. That's all from me, have a great day!
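      For context (this is just the generic cron workaround, not how Unraid's built-in scheduler works internally): plain cron has no "last day of the month" field, so the usual trick is to fire on days 28-31 and bail out unless tomorrow is the 1st. The script path is a placeholder.

      ```bash
      # Crontab entry: run at 02:00 on the last day of every month
      0 2 28-31 * * [ "$(date -d tomorrow +\%d)" = "01" ] && /boot/myscript.sh
      ```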
  20. Eeeer, I was mostly going for the Simpsons reference, saying that yes, it's effective, but with a humorous twist, and I decided to ride that boat and show the next level of "refined jankiness": the daily use of electronic and PC parts wrapped in tape, but with fanciness and luxurious taste in the choice of tape, presented with that naked, Kapton-wrapped SSD in the laptop I use as a dedicated IPMI console on a different VLAN from the rest.
  21. Shoot! Would have been useful a month ago! (Also, there was a discussion in this thread about SanDisk sometimes producing flash drives that fail to boot. Some people report having no issue of that sort, and I do have some SanDisk flash drives that boot with no issue, but also one that doesn't.)
  22. Well, the one on the left is basically what has run my Unraid server since 2017... I always broke the plastic casings of USB keys (and this particular key also had a mobile workstation dropped on it at some point before it became my server's boot device). Kapton seemed to me like the least-bad tape that would still be better than bare PCB (at least I was sure the adhesive wouldn't deteriorate). But you were warned! I said it was wrapped in Kapton tape from the start, in the OG post! And you think a Kapton-wrapped USB key is wrong? What about that!?
  23. Well, from what we can tell, that pack of 3 seems to be a triplet-twin situation. Units sold individually should probably still be unique. Damn, that 2€ saving wasn't worth it.