BVD

Members
  • Posts: 335
  • Joined
  • Last visited
  • Days Won: 1
  • Member Title: Network Janitor

BVD last won the day on August 3, 2022

Recent Profile Visitors: 3,955 profile views

BVD's Achievements: Contributor (5/14)

Reputation: 137
  1. Every Maxtor I ever worked with, you could fry eggs on 🤣 What I really want to mention, though - if your drives are anywhere *near* 48°C, you've got a significant cooling/airflow problem, and I'd recommend looking into it sooner rather than later. ~50°C is fine for NAND/SSDs/NVMe, no worries there. But HDDs become significantly less happy the further above 40°C they run.
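For anyone wanting to check where their drives sit relative to that ~40°C line, a quick sketch using smartctl (the device name is a placeholder - substitute your own):

```shell
# Print the drive's SMART data and pull out the temperature attribute.
# /dev/sda is an example device; adjust for your system.
smartctl -a /dev/sda | grep -i temperature
```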
  2. I have mine going to a ZFS dataset with zstd-5 compression configured - heavy compression usually reaps significant benefits for time-series data, and it's totally worth the (relatively small, at least in my experience with a modern processor) performance tradeoff.
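A minimal sketch of that setup, assuming a hypothetical pool/dataset named `tank/metrics` (your pool and dataset names will differ):

```shell
# Create a dataset with heavy zstd compression for time-series data
# (pool/dataset names are examples, not from the post above):
zfs create -o compression=zstd-5 tank/metrics

# Check the configured property and the achieved compression ratio:
zfs get compression,compressratio tank/metrics
```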
  3. @T_Matz You would need to edit the dashboard file itself to remove those panels completely - open the .json file and delete the corresponding panel definitions.
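If you'd rather not hand-edit the JSON, the same idea can be scripted. This is a hedged Python sketch that drops named panels from a dashboard dict; the dashboard structure shown is a minimal, hypothetical example (real dashboard panels carry many more fields), and the function name is my own:

```python
import json

def remove_panels(dashboard: dict, titles_to_remove: set) -> dict:
    """Return a copy of the dashboard with the named panels removed."""
    kept = [p for p in dashboard.get("panels", [])
            if p.get("title") not in titles_to_remove]
    out = dict(dashboard)      # shallow copy so the original is untouched
    out["panels"] = kept
    return out

# Minimal, hypothetical dashboard for illustration:
dash = {
    "title": "UUD",
    "panels": [
        {"title": "CPU Load"},
        {"title": "Plex Streams"},
        {"title": "Disk Temps"},
    ],
}
trimmed = remove_panels(dash, {"Plex Streams"})
```

You would load the real file with `json.load()`, run it through the function, and write it back out with `json.dump()`.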
  4. @UncleStu What do the Telegraf logs show? They should give an indication as to what's going on. You may need to update Telegraf, and the logs should also indicate which permissions issues need to be addressed 👍
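For reference, a typical way to pull those logs when Telegraf runs as a Docker container (the container name `telegraf` is an assumption - yours may differ):

```shell
# Show the last 100 log lines from the container named "telegraf":
docker logs --tail 100 telegraf
```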
  5. You can always trim down the number of panels to suit your specific requirements. It is Plex-only - if someone wanted to pull Emby/Jellyfin/(etc.), you'd need a separate set of requirements, since the API calls necessary to collect the data (and the applications responding to those calls) would be different for each. Kind of an aside, but I think it's worth at least mentioning for anyone coming into this fresh - I'd be willing to bet @falconexe has invested several hundred hours into this over the years, so even if it only takes someone ~5% of the total "from-scratch" creation time to tweak it to match their specific combination of server + storage + containers + share config + (all the things)... most should probably expect to need a handful of evenings of dedicated effort to get the dashboard up to a fully functional state. (extra random thoughts, hoping 'spoiler'-ing it keeps this post from being too long unless someone clicks on it lol)
  6. As this isn't related to the UUD specifically, I'd ask in the template author's support post - though a quick Google search should pull it up; I'd imagine it's in InfluxDB's documentation 👍
  7. Personally, I still use 6.11 due to issues with customized ZFS dataset features being reverted in later releases. Also, I don't like having to start the array for my ZFS pools to become available lol
  9. Looks like the release notes are missing. I checked the GitHub repo and they don't appear to be posted there either, though looking at the commits it seems like it should just be something like "verify no LLDP service is running during install/upgrade", I think?
  10. Hope things are going better for you these days @gyto6 ♥️ Looking forward to hearing more about the use cases and implementation!
  11. As mentioned above, it works - not sure where you got this information? If it's documented somewhere, that documentation is definitely incorrect (source: I'm running servers from 6.9.2 -> 6.11.5, all of which still work with CA just fine 👍)
  12. After re-reading the post, I think this comes down to how the announcement was worded - calling it a 'subscription' isn't quite right, as that term implies 'rental' in most people's minds. If they'd instead called it something like 'license support', folks would read it and it'd better reflect the ground reality, I think...? @limetech I was talking to a couple of friends of mine here at work (enterprise data mgmt), where we've recently been undergoing much the same change in revenue models (one-time vs. recurring), and when they read the announcement I linked their way (because only a fraction of a percent will watch the full podcast), the first responses I got back were all some form of "the enshittification of all things tech continues, not shocked"... However, after I explained a bit further (having listened to the podcast), they were much more amenable to the idea, and their only real issue was with the announcement itself. Some thoughts I'd had here: While I know that having time-based, calendar-locked recurring revenue makes things FAR simpler for all things fiscal (forecasting revenue, planning hiring, etc.), I feel the current sentiment towards such models is making this go down harder than is actually necessary. The way Unraid handles releases unfortunately makes it problematic to handle this any other way, though (imo). There are often improvements/enhancements included even in point releases (which increases the potential for introducing new bugs), so there is no 'maintenance only' type of release. For example, 6.12's big feature was ZFS integration; but over the course of the last 8 point releases, we now have a new drivers page, new UI buttons to show/hide individual or all items at once, enhancements to package handling, etc. Some recommendations towards the above:
  • Create a software support lifecycle policy - e.g. each major release of Unraid shall receive continued support (bug fixes and security updates) for X term (18 months is fairly typical, but 12 isn't unheard of). This gives an actual cadence customers can plan against, as opposed to paying for a 'period of upgradability', a term during which they've no idea what releases may or may not come.
  • Charge by major release instead of by time period - This way, there's no 'unknown' in the customer's mind as to what they're receiving for what they've paid; they know exactly how long they'll get security updates, and for exactly how much.
  • Only bug fixes allowed in point releases - Most mature software orgs I've worked with in the enterprise have a version schema something like "7.3.7-p3", where anything with the "-p#" is known to include only bug fixes found within that build (so in this example, 7.3.7 has had 3 bug-fix/security-patch releases). Since Unraid doesn't follow such a schema, we'd instead have something like "6.13.2", where 6 is the major release (such as the big ZFS release), 13 is the minor/enhancement release (which would include all the small enhancements currently shipped in point releases), and 2 is the number of bug-fix/security-patch updates it has received.
  • Possible LTS enabled by the above (oft-requested feature) - With this schema, Limetech could finally implement the (seemingly heavily requested) option of an LTS Unraid build (which would have to track an LTS kernel for it to make business sense to Limetech, of course). Since you're charging based on the release, not the time window, you can charge more for the LTS release to help offset the additional costs. Heck, you could even use the time-based revenue model initially proposed for these LTS releases, and since *no improvements* are included in point releases, that mitigates some of the development overhead one might typically associate with such a strategy.
  I know these proposals set a fairly high bar, and may even look a bit daunting at first read... But much of the additional work can be handled by build automation tools. Argo or Flux CD could be built up to automate creating and validating these LTS point releases, for example, making maintaining two kernels far less painful than it otherwise could be. After the initial build-out is completed (and that would take a little time), its associated costs effectively amount to a percentage of one DevOps engineer's time, thanks to the fact that many of the amazing features in Unraid are plugin-based (helped further by their being community supported). I've got about a thousand ideas for all of this, but that's mostly because it's the world I live in at work each day, and while I know this wasn't short (...lol...), I hope at least some of it is some-kind-of helpful! Sincerely wishing all the Limetech folks the best as you're going through this transition 👍
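To make the proposed "major.minor.patch" schema concrete, here's a small illustrative Python sketch - the function name and return shape are my own invention, not anything Limetech uses:

```python
def parse_version(v: str) -> dict:
    """Parse a version string under the proposed schema:
    major = big feature release, minor = enhancement release,
    patch = count of bug-fix/security-patch updates received."""
    major, minor, patch = (int(part) for part in v.split("."))
    return {"major": major, "minor": minor, "patch": patch}

# e.g. "6.13.2": major release 6, enhancement release 13,
# with 2 bug-fix/security updates applied.
info = parse_version("6.13.2")
```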
  13. I'd call it an "OS support fee". In all the larger enterprises I've worked for (before the inability to own anything became the enterprise norm), software upgrades were part of a support contract. In this instance, it's more of a "license support" situation than "software support" (e.g. you own the hardware, but the support contract grants access to upgrades for that hardware), but it's a similar ideology. Personally, I feel it's about time Unraid did this, and I'm glad for the move. While I'm not as much of an Unraid user as I once was, for the target market of the product I think it's more than fair, and past time 👍
  14. As someone who spends probably a third of my day working through Kibana/Sentry/Grafana, the amount of appreciation I have for your work on this is simply impossible to fully convey... ...Don't suppose you're looking for a job by chance, are you? 😅
  15. Not sure what your background might be (veteran IT, hobbyist, etc.), but this would almost certainly require some Docker architectural knowledge to set up properly for your specific environment imo - depending on your setup, it may need to run in privileged mode (as root), for example, in order to control all other containers' network flows, have awareness of any Docker networks you've created, etc. Then, as long as you're familiar with Linux, it's all just mapping what you'd normally use for traffic shaping on a Linux host over to the equivalent Docker components (e.g. NIC => Docker network, container ID => host). If you're looking for an easier way to handle it, you could choose to enable host access to custom networks and assign the containers their own addresses, then manage traffic shaping via either your switch or your router's UI. That's probably the most straightforward method for most, and what I'd recommend 👍
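As a rough illustration of that NIC => Docker network mapping, here's a hedged shell sketch that rate-limits the Linux bridge backing a user-defined Docker network with tc. The network name (`mynet`) and rate are placeholders; this must run as root on the Docker host, and details will vary by setup:

```shell
# Bridges for user-defined networks are named "br-" + the first 12
# characters of the network ID:
BRIDGE="br-$(docker network inspect -f '{{.Id}}' mynet | cut -c1-12)"

# Cap egress on that bridge at 50 Mbit using a token bucket filter:
tc qdisc add dev "$BRIDGE" root tbf rate 50mbit burst 32kbit latency 400ms

# Inspect the qdisc to confirm it took effect:
tc qdisc show dev "$BRIDGE"
```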