Keexrean

Members
  • Posts: 109

  • Location: France


  1. I didn't. I just slapped in a cheap $20 4x1 Gbps PCIe NIC, because the Intel mezzanine card for Dell servers is dogshit, and the Broadcom one for the R720 (Broadcom being the brand of most of my PCIe NICs, which work flawlessly) isn't cheap and is an expense I don't want to make.
  2. Thanks! It's indeed reassuring; you had similar concerns to mine, or at least voiced them well. If it really won't become impossible to manage licenses without the UPC, that would be worth mentioning in the release notes, since their wording makes it sound like the exact opposite. So that part is "solved", but it's still only part of my grief. Or rather, it would really be solved if the thread didn't end in a bunch of unanswered questions, and if the UPC as a whole were just a module/plugin we could opt out of or uninstall, and see it disappear from the dashboard AND get unloaded from the OS.
     I wouldn't care about the UPC if it were like the FTP server or Mover, where you can just flick it off and be done with it. I wouldn't care about it if it were a pre-installed plugin you could just punt out of your system. I don't mind features I don't use, so long as you don't shove them in my face. A legitimate security pop-up on Pr*xm*x concerning community repos, I'm fine with, because it's a legitimate security concern. A pop-up/recurring banner/alert pushing the My Servers plugin or that kind of shenanigan, I'm through with. An ugly space-consuming wart on the general header also counts as "shoving it in my face", especially when it scatters the server name, description, uptime and license around: those used to be four pieces of information in a clear, readable list, and are now a design mess that won't let your eye off the big Sign In text and its orange carbuncle of an Unraid logo.
     I will be updating to 6.10, but not without a 6.9.2 backup and a PVE boot drive ready to go as an option, in case I don't feel like actually going into the GUI's CSS file to nuke the UPC out of my view.
  3. From the report: "Starting with this release, it will be necessary for a new user to either sign-in with existing forum credentials or sign-up, creating a new account via the UPC in order to download a Trial key. All key purchases and upgrades are also handled exclusively via the UPC." From the updated wiki: sounds like indeed, no, paid keys won't be handled externally. Or maybe Limetech just forgot to mention it's still a possibility in BOTH the report AND the wiki?
     So from what I can read now, since 6.10, if you want to upgrade your key or spin up a new server, key management is done online with an Unraid account, and the key is indefinitely linked to that account. That's a no for me, and it does qualify as "phoning home for paid licenses" to me; it doesn't sound optional at all if I believe the official report, and I quote: "All key purchases and upgrades are also handled exclusively via the UPC." So maybe it has never phoned home for paid licenses before, but according to what I'm reading, it now will, and that's part of my grief.
     I PAID for it. And it's gonna nag me about something I never signed up for? I don't mind the security alert on Pr*xm*x for using community repos and such, because it's free software and a justifiable alert, tbh. But you're telling me the thing I paid for will give me pop-ups to install stuff I am thoroughly against? Does Unraid have an identity crisis or what, to pull preloaded McAfee-trial / Avast-Free-antivirus types of annoyance, basically bullying you into stuff you never agreed to on an OS you paid for? If I wanted that type of crapware "flair", I would run Windows Server.
  4. Wait... if I read that correctly, it would mean that starting from 6.10, an Unraid server would not only be "phoning home" to check key validity, but would stay connected to and/or associated with a forum account? At least if I don't want to worry about my certificate renewal? And I have to link my key to a forum account? "All key purchases and upgrades are also handled exclusively via the UPC." Ah hell no. When I bought into Unraid, it was because it was a one-time thing, with no strings attached. But you're telling me now that hardware and data I own, under my roof, will basically join an online cloud just to kindly ask permission to run properly? I'm out. Just out. I'll see how it goes, but I'll be preparing to move over to another OS in case I end up not liking it as much as I think I won't. There is a reason I don't use Ubiquiti gear, or any IoT devices that rely on cloud services instead of local ones. There is a reason I use local accounts on Windows and not M$ ones. The whole point of self-hosting is to not depend on external entities for services or access to your own stuff. Unraid might just have lost its whole point there for me, and the user-friendliness won't be enough anymore to justify its place in my rack compared to Pr*xm*x, which I'm very fine with too and already run several nodes of.
  5. Hi everyone! Some of you, like me, don't actually have a room or a closet dedicated to their servers. Some of you, like me, don't have a basement or a garage to tuck them away from the living space. Some of you, myself especially, even sleep in the same room as their open rack. Yup. And even if you don't, you may have environmental conditions, or noise requirements, that are way different from what the handful of cooling profiles available in iDRAC were initially made and tuned for. That's exactly why I made myself an assortment of scripts to have my servers' cooling managed automatically, but from parameters I can fine-tune.
     You will find in this GitHub repository the script as I published it, made to be standalone* so it can be run on Unraid as-is as a cron job, for example through the User Scripts plugin. *Unlike the ones I actually use, which are very setup-specific (detailed in the README.md). The fancontrol.sh script basically gives you everything you need; I tried to detail it step by step in the comments, and it should provide every indication needed to use it. I would, though, advise you to read the README.md carefully, since it also contains warnings about running the script on the same machine whose cooling it is controlling. Mind the requirements and annotations as well: if you're on iDRAC Express, for example, an upgrade might be doable on the cheap.
     If you want to go a bit more in depth with what you can do with iDRAC on these generations of servers and IPMI tools, you can find here a little explanatory summary of the iDRAC commands and tricks I know about, since these decade-old resources are slowly disappearing, scattered across the web, or sometimes only findable through web.archive.org's Wayback Machine. I'm in no way an expert, and completely open to being schooled constructively. That's what forks and pull requests are for on GitHub.
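For the curious, the kind of temperature-to-duty-cycle logic a script like that implements can be sketched roughly like this. To be clear, this is a minimal sketch, not the actual fancontrol.sh: the iDRAC address and credentials, the thresholds, and the duty cycles are placeholder assumptions; only the `raw 0x30 0x30` commands are the widely documented Dell PowerEdge IPMI ones.

```shell
#!/bin/bash
# Sketch of temperature-driven fan control on a Dell PowerEdge via IPMI.
# The raw 0x30 0x30 commands are the well-known Dell ones; the iDRAC
# address, credentials, thresholds and duty cycles below are EXAMPLES.

IDRAC_ARGS="-I lanplus -H 192.168.0.120 -U root -P calvin"  # hypothetical iDRAC

# Map a temperature in degrees C to a fan duty cycle in percent.
pick_duty() {
    local temp=$1
    if   [ "$temp" -lt 45 ]; then echo 10
    elif [ "$temp" -lt 60 ]; then echo 25
    elif [ "$temp" -lt 70 ]; then echo 45
    else echo 100   # too hot: hand back to the automatic profile
    fi
}

# Read the hottest temperature reported by the IPMI sensors.
read_temp() {
    ipmitool $IDRAC_ARGS sdr type temperature \
        | awk -F'|' '/degrees/ {gsub(/[^0-9]/,"",$5); if ($5+0 > max) max=$5+0}
                     END {print max}'
}

# Only touch the hardware when ipmitool is actually available.
if command -v ipmitool >/dev/null 2>&1; then
    temp=$(read_temp)
    duty=$(pick_duty "$temp")
    if [ "$duty" -ge 100 ]; then
        # Re-enable iDRAC's automatic fan control.
        ipmitool $IDRAC_ARGS raw 0x30 0x30 0x01 0x01
    else
        # Take manual control, then set the duty cycle (as hex percent).
        ipmitool $IDRAC_ARGS raw 0x30 0x30 0x01 0x00
        ipmitool $IDRAC_ARGS raw 0x30 0x30 0x02 0xff "0x$(printf '%02x' "$duty")"
    fi
fi
```

Dropped into cron (or User Scripts) every minute or so, something shaped like this keeps the fans quiet at idle while still bailing out to the automatic profile when things heat up. Read the repo's README before running anything like it, for the reasons mentioned above.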
  6. Hi people, here's the thing: I have a router that doesn't support VLANs (long story short: ISP router, proprietary GPON, can't be arsed to run an edge router). As such, my VLANs are different networks altogether, and really only exist on my L2 switch to segment which ports are on which network. Consider eth0 and eth1. Neither is configured as VLAN-aware in Unraid; again, the VLANs are only set up as untagged ports on the switch anyway. eth0 is on VLAN1, the ISP router's network, XX.XX.7.XX/24. From that network, from any other computer on it, I can access every running service and the web GUI, working as intended and expected. eth1 used to be a direct connection, through a DAC cable, between my server and my workstation. The server is set to a static IP on XX.XX.42.XX/24, and so is my workstation. With a simple DAC cable linking the two back to back, 10 Gbps transfers work just fine, and I can access dockers and everything. But if I plug both of them into the switch, on ports assigned to VLAN42 (where they are "alone": there is no router on that network or VLAN), they can ping each other, the workstation can SSH into the server, and I can even smbclient into the workstation's shares from the server! But no web GUI, no talking to the dockers on host or bridge networking, NOTHING. What gives?
  7. I don't know if you still have your issue at the moment, but I encountered a similar one on 6.8.x with my R720's onboard NIC: it would just drop the eth link, and no amount of pounding cables into RJ45 ports would fix it. I would have to restart the server several times to get what I'd call a "good boot", where it doesn't lose connection within the first 30 minutes to an hour of uptime. I ended up splurging on a PCIe Base-T NIC, which has held up flawlessly since. Then, starting with 6.8.3 and worse since 6.9, it's my OTHER PCIe NIC, an SFP+ one, that started having this issue. And you know what? No one, in 10 months, cared enough to take a look at it. A deal-breaking issue for a server, randomly losing connection, and no one gives a foo about it. So my call would be for you to take the loss of a PCIe slot and fifty-odd quid on a PCIe NIC, or to switch to another server distro than Unraid 6.9, because at this rate it ain't gonna be fixed before Unraid 42.0.
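In the meantime, a band-aid I'd consider (a workaround sketch, not a fix: the interface name, log tag, and timing are my assumptions, and `ip link` needs root) is a small cron-able watchdog that bounces the interface when the kernel stops reporting carrier:

```shell
#!/bin/bash
# Band-aid watchdog: if the watched NIC reports no carrier, bounce it.
# Interface name, sleep time and log tag are example assumptions.

IFACE="${1:-eth1}"   # hypothetical interface to watch

# Return 0 if the kernel reports carrier (link up) for interface $1.
has_link() {
    local carrier_file="/sys/class/net/$1/carrier"
    [ -r "$carrier_file" ] && [ "$(cat "$carrier_file" 2>/dev/null)" = "1" ]
}

# Take the interface down and back up (requires root).
bounce() {
    ip link set "$1" down
    sleep 2
    ip link set "$1" up
}

main() {
    if has_link "$IFACE"; then
        return 0                     # all good, stay quiet for cron
    fi
    logger -t nic-watchdog "link down on $IFACE, bouncing interface"
    bounce "$IFACE"
}

# Only act when the interface actually exists on this box.
[ -d "/sys/class/net/$IFACE" ] && main || true
```

Run it every few minutes from cron (or the User Scripts plugin). It obviously won't help when the driver itself wedges hard enough that only a reboot recovers the card, but it catches the softer link drops without you having to notice first.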
  8. Hello hi. About two weeks into using Unraid 6.9, trying 2 of my 3 known-working SolarFlare SFN5122N cards, and I already had to reboot TWICE because Unraid just casually and randomly forgets that a network card is supposed to fulfill a function. Basically no info other than "ethx link is down, please check cables". Oh, I've got cables: direct-attach copper or fiber, which color, length or thickness do you fancy? Even using the ones that never dropped a packet in back-to-back connections between my Proxmox and workstation boxes, Unraid still ain't got a clue a cable is plugged in at all; while Windows and Proxmox visibly have no issue seeing and using these NICs, Unraid apparently hasn't got its goggles on and has a bad case of wrist stumps. Point is: I can't use the onboard NIC, because Unraid's being a dumbass with it (it only works when I pass it through to a VM; great, I don't need that!), so I have a NetXtreme II BCM57810 dual 10 Gbps for ethernet cabling. And now I apparently can't use my SFP+ Solarflare card either, and will have to get another SKU/brand because Unraid can't deal with that one too? And in 10 months basically no one has cared. That's awesome.
  9. Hi! Back again, not with an issue but just a question this time. I know that by configuring swag a certain way with a Cloudflare domain, you can hide your WAN IP behind Cloudflare's. Instead of using a VPS/VPN as a layer between my server and the world, I'm tempted to do the same. BUT, I'm not at Cloudflare. My domain (and webhosts) are at OVH. I know OVH has Let's Encrypt certificates, CDNs and all the bells and whistles, but I would like directions (or even a step-by-step, if you don't mind) on how to set up swag with OVH to keep my WAN IP private. It's pretty critical to nail this the first time, as I can't allow myself much downtime: my Owncloud, Nextcloud, and virtualized desktops are used daily for some pretty time-sensitive jobs, and I would be as afraid of a few hours of downtime as of an attack targeting my WAN IP directly, the latter still being a very real threat with the current setup. For info, the current setup is the "good ol' no-ip gig": a no-ip tracker, with the "front-facing" OVH domain's CNAME field pointing at the no-ip domain, and swag handling the SSL certificate for the OVH domain.
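For the certificate half of this, swag does ship a certbot OVH DNS plugin, so validation can happen over OVH's DNS API without ever exposing port 80. A hedged sketch of the setup (paths assume the usual Unraid appdata location; the credential keys come from OVH's API "create application" flow, and all values below are placeholders):

```shell
# Sketch: swag + OVH DNS-01 validation. Assumptions: appdata lives at
# /mnt/user/appdata/swag, and you've created API credentials at OVH.

# 1) In the swag container's environment (Unraid template / docker run):
#    VALIDATION=dns
#    DNSPLUGIN=ovh
#    URL=example.com          # placeholder: your OVH-hosted domain
#    SUBDOMAINS=wildcard

# 2) Fill in the credential file swag generates on first start:
cat > /mnt/user/appdata/swag/dns-conf/ovh.ini <<'EOF'
dns_ovh_endpoint = ovh-eu
dns_ovh_application_key = APPLICATION_KEY_HERE
dns_ovh_application_secret = APPLICATION_SECRET_HERE
dns_ovh_consumer_key = CONSUMER_KEY_HERE
EOF
chmod 600 /mnt/user/appdata/swag/dns-conf/ovh.ini   # keep the secrets private
```

Worth noting, though: as far as I know this only solves certificate issuance. OVH's DNS doesn't proxy your traffic the way Cloudflare's orange-cloud does, so the A/CNAME record still resolves to your WAN IP unless you put a CDN or VPS in front of it.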
  10. @grphx @CrimsonBrew @Widget Nope. Short of some weird techniques I have seen flung around, UEFI seems to be a no-go (at least on an R720) because of the onboard video chip, which you can disable in favor of an add-in video card, but then you lose iDRAC's remote screen and access to the BIOS at boot time. Though legacy vs. UEFI isn't that big of a deal for virtualization purposes, since the host's boot method has minimal impact on the VMs' boot method, and legacy boot sometimes even lets peculiar hardware work in passthrough when it derps in UEFI. Honestly, I don't think we're missing much. UEFI boot on Unraid only seems useful in some cases I've heard about, where people's servers couldn't boot properly in legacy after some update, but switching to UEFI did the trick (on way more recent hardware than our beaters here, though).
  11. Okay, update. Since that fiasco, I basically dropped $$$ on a second add-in NIC, a dual 10 Gbps RJ45 card (to be eth0), on top of the original 4x1 Gbps NIC and the already-added dual 10 Gbps SFP+ card. And so far it was working great! Except today. After 72 days of uptime, it crashed... but not the OG NIC! The SFP+ NIC is now the buggy one! Unplugging and plugging back in didn't do a thing; a reboot fixed it... but I still find it surprisingly unpredictable and unsolvable behavior, which wouldn't be much of an issue for a desktop distro, but is quite worrying on a server-oriented distro, AND on server hardware, mind you. (I wouldn't make much of this kind of issue on a desktop; it's precisely because it's a server that I take it seriously.) procyon-diagnostics-20201222-1913.zip
  12. Lucky you! I don't even have the Configure button! (Dell PowerEdge R720)
  13. Do you realize the issue itself has already been troubleshot and solved? And that it may be linked to certain hardware, and that your precise situation isn't a miracle solution for everyone else?
  14. @rachid596 I'd advise you to read a bit more closely, because I'm already on 6.8.1 and with the plugin, and this precise topic is about how it doesn't work with that setup. You're welcome.