BVD


Everything posted by BVD

  1. Looks like the release notes are missing - checked GitHub and they don't appear to be posted there either, though looking at the commits it seems like it should just be something along the lines of "verify no LLDP service is running during install/upgrade", I think?
  2. Hope things are going better for you these days @gyto6 ♥️ Looking forward to hearing more about the use cases and implementation!
  3. As mentioned above, it works - not sure where you got this information? If it's documented somewhere, that documentation is definitely incorrect (source: I'm running servers from 6.9.2 -> 6.11.5, all of which still work with CA just fine 👍)
  4. After re-reading the post, I think this comes down to how the announcement was worded - calling it a 'subscription' isn't quite right, as that term implies 'rental' in most people's minds. If they'd instead called it something like 'license support', people reading it would come away with something closer to the ground reality, I think...? @limetech I was talking to a couple of friends here at work (enterprise data mgmt), where we've recently been undergoing much the same change in revenue models (one-time vs recurring), and when they read the announcement I linked their way (because only a fraction of a percent will watch the full podcast), the first responses I got back were all some form of "the enshittification of all things tech continues, not shocked"... After I explained a bit further (having listened to the podcast), they were much more amenable to the idea, and their only real issue was with the announcement itself.
     Some thoughts I'd had here: while I know that time-based, calendar-locked recurring revenue makes all things fiscal FAR simpler (forecasting revenue, planning hiring, etc), I feel the current sentiment towards such models is making this go down harder than is actually necessary. Unfortunately, the way unraid handles releases makes it hard to do this any other way (imo). Improvements and enhancements often land even in point releases (which increases the potential for introducing new bugs), so there is no 'maintenance only' type of release. For example, 6.12's big feature was ZFS integration, but over the course of the last 8 point releases we've also gained a new drivers page, new UI buttons to show/hide individual or all items at once, enhancements to package handling, etc.
     Some recommendations towards the above:
     • Create a software support lifecycle policy - e.g. each major release of UnRAID receives continued support (bug fixes and security updates) for X term (18 months is fairly typical, but 12 isn't unheard of). This gives customers an actual cadence to plan against, as opposed to paying for a 'period of upgradability', a term during which they've no idea what releases may or may not come.
     • Charge by major release instead of by time period - this way there's no 'unknown' in the customer's mind as to what they're receiving for what they've paid; they know exactly how long they'll get security updates for, and for exactly how much.
     • Only allow bug fixes in point releases - most mature software orgs I've worked with in the enterprise use a version schema something like "7.3.7-p3", where the "-p#" suffix indicates a build known to contain only bug fixes (so in this example, 7.3.7 has had 3 bug-fix / security-patch releases). Since UnRAID doesn't follow such a schema, we'd instead have something like "6.13.2", where 6 is the major release (such as the big ZFS release), 13 is the minor/enhancement release (which would include all the small enhancements currently landing in point releases), and 2 is the number of bug-fix/security-patch updates it has received.
     • Possible LTS release enabled by the above (an oft-requested feature) - with this schema, Limetech could finally implement the (seemingly heavily requested) LTS UnRAID build (which would have to track an LTS kernel for it to make business sense to Limetech, of course). Since you're charging by release rather than by time window, you could charge more for the LTS release to help offset the additional costs. Heck, you could even use the time-based revenue model initially proposed for these LTS releases, and since no improvements land in point releases, that mitigates some of the development overhead one might typically associate with such a strategy.
     I know these proposals set a fairly high bar, and may even look a bit daunting at first read... But much of the additional work can be handled by build automation tools; Argo or Flux CD could be built up to automate creating and validating these LTS point releases, for example, making maintaining two kernels far less painful than it otherwise would be. After the initial build-out is complete (and that would take a little time at least), the associated cost effectively amounts to a fraction of one DevOps engineer's time, thanks to the fact that so many of UnRAID's best features are plugin-based (helped further by the fact that they're community supported). I've got about a thousand ideas for all of this, but that's mostly because it's the world I live in at work each day, and while I know this wasn't short (...lol...), I hope at least some of it is some-kind-of helpful! Sincerely wishing all the Limetech folks the best as you go through this transition 👍
  5. I'd call it an "OS support fee". In all the larger enterprises I've worked for (pre the inability-to-own-anything that's become the enterprise norm), software upgrades were part of a support contract. In this instance, it's more of a "license support" situation than "software support" (e.g. you own the hardware, but the support contract grants access to upgrades for that hardware), but it's a similar ideology. Personally, I feel like it's about time unraid did this, and I'm glad for the move. While I'm not really as much of an unraid user as I once was, for the target market of the product I think it's more than fair, and past time 👍
  6. As someone who spends probably a third of my day working through kibana/sentry/grafana, the amount of appreciation I have for your work on this is simply impossible to fully convey... Don't suppose you're looking for a job by chance, are you? 😅
  7. Not sure what your background might be (veteran IT, hobbyist, etc), but this would almost certainly require some docker architectural knowledge to set up properly for your specific environment imo - depending on your setup, it may need to run in privileged mode (as root), for example, in order to control all the other containers' network flows, have awareness of any docker networks you've created, etc. Then, as long as you're familiar with linux, it's all just mapping what you'd normally use for traffic shaping on a linux host over to the equivalent docker components (e.g. NIC => docker network, container ID => host). If you're looking for an easier way to handle it, you could choose to enable host access to custom networks and assign the containers their own addresses, then manage traffic shaping via either your switch's or router's UI. That's probably the most straightforward method for most, and what I'd probably recommend 👍
  8. This should do what you're looking for: https://github.com/lukaszlach/docker-tc
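     To make the docker-tc approach above a bit more concrete, here's a minimal sketch of starting a container with the shaping labels that docker-tc watches for. It uses the Python Docker SDK, and the `com.docker-tc.*` label names/values are going from memory of the project's README rather than verified against it, so treat them as assumptions and double-check the linked repo before copying:
     ```python
     # Sketch: start a container carrying labels that a docker-tc sidecar
     # would (as I recall) pick up to apply bandwidth/latency shaping.
     # Requires the 'docker' package (pip install docker) and a running daemon.
     import docker

     client = docker.from_env()

     container = client.containers.run(
         "nginx:alpine",
         detach=True,
         name="rate-limited-web",            # hypothetical container name
         labels={
             # Label names assumed from the docker-tc README - verify before use.
             "com.docker-tc.enabled": "1",    # opt this container in to shaping
             "com.docker-tc.limit": "1mbps",  # cap egress bandwidth
             "com.docker-tc.delay": "100ms",  # add artificial latency
         },
     )
     print(f"started {container.name} ({container.short_id})")
     ```
     The nice part of the label approach is that the shaping rules live alongside each container's own definition, so you don't have to centrally track which veth belongs to which workload.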
  9. I was referring to this: <NIXED - I misread> There are several other threads in there related to this with jonp in them, though this one wasn't - apologies all!
  10. Wow, looks awesome, and done SUPER quick, thank you so much! I don't use the animated icons, as I'd unfortunately never been able to get them to work properly on my end (idk if it's a versioning problem, one unique to my setup, or whatever the case may be lol), but had always thought they looked amazing regardless 😍 The only animation idea I'd had was something like having the center part of the icon look like it was 'dropping in', having it bounce once before settling. At first the center part is large, covering almost the entire icon tile, then it shrinks down and down till it's a bit smaller than default, finally increasing in size (more quickly than the shrink, to simulate the 'bounce') till it reaches the expected end size. Seemed like a neat idea, but as I don't (or can't? lol) use animated icons anyway, I hadn't really mentioned it 😅
  11. It was also reported in various news outlets, such as here. It's Plex deciding, without anyone's consent, that anyone you've shared your library with should also know what you've watched. And this is opt-out, meaning you were defaulted into it, and wouldn't uncover the setting unless you stumbled across it accidentally... or until one of the people on your server shoots you an email asking about your latest viewing habits. The bigger concern for me is that it effectively shows they've zero concern for anyone's privacy more broadly - there's nothing to stop them from doing something like this again, only never telling you, instead just selling your viewing habits to the highest bidder, and this time you'd not have someone reaching out to you so you knew to go change the new setting they implemented. SSO is single sign-on - I have several family and friends on my bitwarden and nextcloud instances as well, so instead of adding yet another login they have to remember, I figured I'd set up SSO so they could log in to all three with a single account. I'm currently working on getting all their play histories synced from my Plex server over to jellyfin, and I think that'll be the last step before I start getting people migrated over. Hoping to have the Plex server taken offline by spring... at least that's the timeline I've got planned at this point lol
  12. @hernandito Thanks so much to both yourself and @Josiah for your creative efforts here! Certainly feels nice to have added a little spice to the dashboard 💚 If you're still taking requests, I'd be very interested in a Jellyfin icon replacement, if you've the time at least? I'm using the yellow/gold set, but I'm color agnostic as I can always change it afterwards of course! I'd been laying the groundwork for transitioning everyone on my plex server over to jellyfin ever since they added 1st party streaming, but had basically set it aside for the longest time, as things didn't seem *tooooo* bad relative to all the effort involved - not just setting up jellyfin itself, but implementing SSO for it, compiling all the documentation I'd need to communicate to the users transitioning, testing clients, etc... But the recent *MASSIVE* privacy scandal/issue has been the kick in the pants I needed to pick it back up and try to get it over the line. It'd be awesome to keep the beautiful dashboard scheme through the transition 😍
  13. Just updating here as I came across a feature request post from @jonp (ex-Limetech) which pre-dates this one. Completely missed it up until now, as it's in the "Unscheduled" sub-forum... Indicating, I guess, that it is something that's on the Limetech radar to be implemented at some point, right? I sincerely hope so! I really would like to move away from virtualizing UnRAID some day 😅
  14. There's a huge amount of discussion around this, going back at least to 2015 - it was very nearly the one thing that kept me from purchasing in the first place actually 😳 I've also tried to articulate use-cases for this to justify the development effort as best I could.... I may also have... ranted... about it a bit 😅 Seems that the biggest thing keeping this from happening is UnRAID's licensing model - it's been noted that this would effectively require a complete re-write of the core/legacy OS components (array stop/start, device handling, service handling, etc). Understanding it's an utterly massive undertaking... I think it's absolutely a necessity at this point, as it's truly hampering the platform's flexibility more and more as time goes on 😓 Aaaanyway, please do upvote / comment on the feature request if you would - the more visibility we can get on it, and the more people show the desire, the more likely it is to get the attention it needs to be scoped and implemented! 💚 EDIT: Looks like it goes back at least to ~2014, as a feature request by what was to become a Limetech employee 😅
  15. This sounds like a bug imo... Someone not noticing for an extended period would need to copy everything out and back in to reap the compression benefits, and that can be a huuuuge PITA...
  16. It'd be nice to have the image updated, especially now that the docs support dark mode, assuming you've the time available 👍
  17. I came to the same conclusion myself... I'd previously taken issue with the fact that unraid's design means that 'everything' must be taken offline in order to address the inevitable eventualities of a NAS - adding storage, replacing a failed drive, or expanding capacity with larger disks are the kinds of things expected to be semi-regularly undertaken with any NAS, and unraid is the only OS I've seen that *requires* downtime for even the most basic maintenance. The fact that it can be solved any number of ways (several completely viable approaches have already been proposed), yet hasn't drawn so much as a whisper from LT, has been... difficult.
  18. I think what you're experiencing is confirmation bias. And you sort of side-stepped my point, but that's alright, we can run with it - yes, a good number of people want VM snapshots and backups in the UI (and absolutely, it'd be a valuable feature); no, it's not all, nor even a majority (as we can tell from the poll results here). When it comes to unraid user engagement, reddit is a ghost town compared to these forums. VM backups are quantifiably less desired by the unraid user base than... well, multiple arrays in this case. I'd argue that the 'biggest reason' people move to virtualize unraid is actually two separate reasons - if you read through the history of this forum, the years of posts that have come through in that time, there are two recurring themes:
     • The handling of licensing is *extremely* annoying - having to effectively 'turn off my NAS' to replace or add a drive is utterly absurd. And worse, now that ZFS is 'included', you have to shut down everything in order to do any work on it as well. Any NAS OS should be able to remain online for anything short of a (non-media) hardware failure or an OS update. This is a pretty huge failure in design IMO.
     • Having to work around all the various "unraid-isms" - if you want to make anything persistent, it takes a bunch of extra steps to do so, steps that are unnecessary on (literally anything else). Even simple things like keeping your bash history or installing a package are so unnecessarily complicated that folks effectively 'graduate' to another OS. It used to be that any user could easily recompile the kernel after installing whatever packages they found useful; now there's a specific location you have to put the package in, and you wait for unraid to re-install it on every reboot. More downtime (see licensing above). Just a million little cuts like this.
     In the last 6 months, all of 9k people have downloaded the macinabox image - across all of Docker Hub's users, not just unraid. This isn't as common as you seem to be alluding to. In 4 years, it's had ~2.5M pulls, again across all of Docker Hub - how many of those are folks updating to a new image? Or pulling another copy on another machine? What percentage of docker users are also unraid users? If you can show me verifiable numbers that say otherwise, I'm absolutely open to them, and my apologies in advance if so; I just don't believe unraid users running macOS in a VM alongside a Windows VM is nearly as common as those simply running Windows machines, especially given the OS's target audience. As for corporate doublespeak... lol? I honestly don't know how to respond to this, and I don't mean to be rude; it just feels more like someone lashing out than working to justify their position in a logical and reasoned way...? I don't have a horse in this race 🤷‍♂️
  19. +1 !!!! Whole reason I'm still on 6.11.5 😅 Really wish I'd thought of this being an issue beforehand - I'd not have agreed so strongly for its implementation lol 🤣
  20. This definitely isn't the only reason one might choose to virtualize unraid on proxmox... But as I understand it, your point is that "all NAS operating systems should have native virtual machine backup utilities" - is that correct? Assuming yes, I'd frankly disagree. Unraid's primary target audience (I believe) is the average home user who's looking for an efficient NAS OS, and the "average" consumer's main usage for a VM would *typically* be relegated to a windows gaming machine - something that unraid actually handles as well as or better than anything else I've tried... And for backing up that one VM? You can do it just as you would for any other Windows machine. Virtual machines are inefficient when compared to containerized workloads, and CA makes it extremely simple for even a novice to quickly spool up a container for any number of needs. I understand your request is for built-in VM backup functionality, and I certainly agree that this would have value - I'm simply saying I don't necessarily agree with the premise that its absence forces users to virtualize unraid, or that lacking native VM backups is an "Achilles' heel" given unraid's most common use cases and customers.
  21. They're logically completely separate devices and can be treated as such
  22. Yup! All based on the PCI address, so 02:00.1 could have 4 VFs and leave 02:00.0 untouched.
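     A minimal sketch of what that looks like in practice, writing the VF count through the kernel's standard sysfs SR-IOV interface - the PCI address is just the example from the post above, and it assumes the device/driver actually supports SR-IOV (run as root):
     ```python
     # Sketch: enable 4 virtual functions on one PCI function (02:00.1) while
     # leaving its sibling (02:00.0) untouched, via the kernel's sysfs interface.
     from pathlib import Path

     PCI_ADDR = "0000:02:00.1"   # example address from the post above
     NUM_VFS = 4

     dev = Path("/sys/bus/pci/devices") / PCI_ADDR

     total = int((dev / "sriov_totalvfs").read_text())
     if NUM_VFS > total:
         raise SystemExit(f"{PCI_ADDR} only supports {total} VFs")

     # Writing 0 first is the usual dance if VFs already exist, since the
     # kernel rejects changing a non-zero VF count directly.
     (dev / "sriov_numvfs").write_text("0")
     (dev / "sriov_numvfs").write_text(str(NUM_VFS))

     print(f"{PCI_ADDR} now exposes {(dev / 'sriov_numvfs').read_text().strip()} VFs")
     ```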
  23. The Stats plugin does this pretty well (sorry for the crappy screenshot, I'm on mobile currently)
  24. Not as a default, not in my opinion anyway, as setting this to a value of 1 can be dangerous. The setting determines how the OS responds to memory allocation requests from applications, with the options being (take some of this with a grain of salt, it's been a while):
     • 0 - default; responds to allocation requests via a heuristic that weighs how much memory can be allocated based on currently reserved, free, and committed memory. Typically safe.
     • 1 - always accept any memory allocation request, regardless of (anything).
     • 2 - never overcommit memory (never reserve more memory than actually exists, e.g. fail the allocation request if not enough exists).
     In a system like unraid, where we've no swap by default, setting it to 1 could be problematic for some, especially lower-memory systems, and should (I feel at least) have *some* kind of consideration from the user prior to making such a change (meaning 'make the user set this themselves, so at least they've had the chance to consider the implications' lol). There are numerous use cases where it's beneficial to set vm.overcommit_memory=1, you just have to be aware of the consequences... which are potentially crashing your server if unraid attempts to allocate memory for itself (mover running, parity check, file access, etc) when there's not enough available. If you plan to set vm.overcommit_memory to 1, it's important to be more cognizant of system memory utilization, monitoring memory usage more closely than you otherwise would. I'd also consider setting up a swapfile ('fake it till you make it' ram) if you've any concerns over whether you've enough memory to handle everything you're running on your server.
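     For anyone who'd rather script the check than poke at sysctl by hand, here's a small illustrative sketch (not an official tool, just an example under the assumptions above) that reads the current vm.overcommit_memory mode and shows where you'd flip it to 1, with the caveats from the post above:
     ```python
     # Sketch: read and (optionally) set vm.overcommit_memory via /proc.
     # 0 = heuristic overcommit (default), 1 = always allow, 2 = never overcommit.
     # Writing requires root; equivalent to `sysctl -w vm.overcommit_memory=1`.
     from pathlib import Path

     KNOB = Path("/proc/sys/vm/overcommit_memory")

     MODES = {
         "0": "heuristic - kernel estimates whether the allocation is sane (default)",
         "1": "always - every allocation request succeeds; the OOM killer cleans up later",
         "2": "never - requests beyond swap plus a ratio of RAM are refused outright",
     }

     current = KNOB.read_text().strip()
     print(f"vm.overcommit_memory = {current}: {MODES.get(current, 'unknown')}")

     # Uncomment to switch to 'always allow' - only do this if you're monitoring
     # memory closely and/or have a swapfile configured, per the post above.
     # KNOB.write_text("1")
     ```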
  25. So I apparently finally hit the tipping point towards experiencing what you were seeing with Lidarr - seems to be somewhere in the 65-70k track range, where the way the queries to the DB are formulated means the sqlite DB just absolutely chonks in protest. I finished converting Lidarr over to postgres last night, and while it's still sub-optimal IMO from a query perspective, pg is basically able to just brute force its way through. Start-up times are cut down to maybe a tenth of what they were previously, and all UI pages populate within a couple seconds at most 👍