BVD

Members · Network Janitor

  • Posts: 303
  • Days Won: 1 (last won the day on August 3, 2022)
  • Profile views: 2,545
  • Reputation: 112
  • Achievements: Contributor (5/14)

  1. Not as a default, not in my opinion anyway, as setting this to a value of 1 can be dangerous. The setting itself determines how the OS responds to memory allocation requests from applications, with the options being (take some of this with a grain of salt, it's been a while):
     0 - default; responds to allocation requests via a heuristic that determines how much memory can be allocated based on currently reserved, free, and committed memory. Typically safe.
     1 - always accept any memory allocation request, regardless of (anything).
     2 - never overcommit memory; never reserve more memory than actually exists, i.e. fail the allocation request if not enough exists.
     In a system like Unraid where we've no swap by default, setting this to 1 could be problematic for some, especially lower-memory systems, and should (I feel at least) have *some* kind of consideration from the user prior to making such a change (meaning 'make the user have to set this themselves, so at least they've had the chance to consider the implications' lol). There are numerous use cases where it's beneficial to set vm.overcommit_memory=1, you just have to be aware of the consequences... which are potentially crashing your server if Unraid attempts to allocate memory for itself (mover running, parity check, file access, etc.) when there's not enough available. If you plan to set vm.overcommit_memory to 1, it's important to be more cognizant of system memory utilization, monitoring usage more closely than you otherwise would. I'd also consider setting up the swapfile ('fake it till you make it' RAM) if you've any concerns over whether you've enough memory to handle everything you're running on your server. A quick sketch of the relevant commands is below.
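     For anyone wanting to check or experiment with this, a rough sketch of the relevant commands - standard sysctl/procfs stuff; the /boot/config/go line is just the usual Unraid way of persisting it, adjust to taste:

         # check the current setting (0, 1, or 2)
         sysctl vm.overcommit_memory        # or: cat /proc/sys/vm/overcommit_memory

         # change it for the running system only (reverts on reboot)
         sysctl -w vm.overcommit_memory=1

         # persist across reboots by appending the same command to the go file
         echo 'sysctl -w vm.overcommit_memory=1' >> /boot/config/go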
  2. So I apparently finally hit the tipping point towards experiencing what you were seeing with Lidarr - seems to be somewhere in the 65-70k track range, where the way the queries to the DB are formulated means the sqlite DB just absolutely chonks in protest. I finished converting Lidarr over to postgres last night, and while it's still sub-optimal IMO from a query perspective, pg is basically able to just brute-force its way through. Start-up times are cut down to maybe a tenth of what they were previously, and all UI pages populate within a couple seconds at most 👍 (rough config sketch below for anyone wanting to do the same)
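     For anyone wanting to try the same, the switch is driven by the app's config.xml - roughly like the below. I'm going from memory of the Servarr docs here, so treat the exact key names, database names, and connection details as placeholders and double-check the wiki before using:

         <!-- illustrative additions to Lidarr's config.xml; verify against the Servarr wiki -->
         <PostgresUser>lidarr</PostgresUser>
         <PostgresPassword>changeme</PostgresPassword>
         <PostgresHost>192.168.1.10</PostgresHost>
         <PostgresPort>5432</PostgresPort>
         <PostgresMainDb>lidarr-main</PostgresMainDb>
         <PostgresLogDb>lidarr-log</PostgresLogDb>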
  3. You just made my day 🥳
  4. Also wanted to note - while a plugin version would be great as a bandaid, this kind of thing really should be part of the core OS longer term imo (so it doesn't require the user to seek out a plugin just to monitor their drives)... It's one of the few areas where I feel that Unraid lags significantly behind as a platform. It's one of the most basic functions of a NAS, so hopefully this can eventually make it into the OS - no new packages needed, just management / UI thankfully! My justification here is mainly that every other NAS OS out there, both free/open source and commercial, has this baked in (every one I've ever put my hands on, at least!): OpenMediaVault, TrueNAS, even Rockstor (it requires manual input, but it's still all UI with tooltips, so I guess that counts? lol), etc.
  5. It's thankfully even much easier than that, at least for the 'short' tests - you just need to run:
     smartctl --smart=on --offlineauto=on --saveauto=on /dev/sdX
     Output looks like:
     smartctl 7.3 2022-02-28 r5338 [x86_64-linux-5.19.17-Unraid] (local build)
     Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org
     === START OF ENABLE/DISABLE COMMANDS SECTION ===
     SMART Enabled.
     SMART Attribute Autosave Enabled.
     SMART Automatic Offline Testing Enabled every four hours.
     As to the 'long' tests though, I'd not really looked at my disk monitoring setup since the 6.9 (?) changes to how modprobe functioned in Unraid (not that it's directly related lol), but it should *REALLY* simplify this, since smartd comes with the base OS as part of smartmontools. Copy /etc/smartd.conf to your flash drive, denote the drives to run against and when you'd like the tests run, then link the file back to the source location, and you're done (I added it to my list of symlinks which get generated on first array start) - example conf below. I just removed my original scripts for this, much cleaner now! I'd imagine there's a relatively easy way to populate the conf file based on the disk settings, given the user already chooses the controller type for their drives as part of the disk config within Unraid - so finding/sorting/populating based on that existing config should make it pretty well fully hands-off for the user 🎉
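     For reference, a minimal smartd.conf along those lines - the directive syntax is straight from the smartd.conf man page, but the schedule, device names, and flash/symlink paths are just examples to adjust for your own setup:

         # /boot/config/smartd.conf (example; device names and schedule are placeholders)
         #   -a       monitor all SMART attributes
         #   -o on    enable automatic offline testing
         #   -S on    enable attribute autosave
         #   -s REGEX schedule self-tests (S = short, L = long; fields are type/month/day/weekday/hour)
         /dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03)   # short test daily at 02:00, long test Saturdays at 03:00
         /dev/sdb -a -o on -S on -s (S/../.././02|L/../../6/03)

         # then link it into place so smartd picks it up (e.g. from the go file or a first-array-start script):
         # ln -sf /boot/config/smartd.conf /etc/smartd.conf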
  6. Technically correct (the best kind of correct!), but I think if one were speaking to a filesystem layman or lifetime Windows user, it's the closest equivalent available, and if you were to simply leave it at "zfs has no fsck" without further explanation as above, it may leave them with a potentially undeserved negative sentiment. Either way, once more, this wasn't posted for you specifically, but as an additional breadcrumb trail for others who might come across this through search (etc.) to give them some additional search terms to help 🤷 As this is a feature request post (and the core issue of corruption has been resolved, at least in this instance), I wouldn't really expect any sort of timeline for a response directly from limetech honestly... Not only is this something that's being requested for a future build, but given the size of the team, they've got to focus their efforts in the forums primarily on support and bug fixes, at least during a period where there are still release candidates needing sorted out (the macvlan issue, for example, is one that's been plaguing folks for quite some time, and may finally be getting ironed out - woot!)
     Several other things I'd say are worth keeping in mind:
     • There's a plugin already available which would help test for this, 'Fix Common Problems', as @Squid mentioned above. Unraid can't realistically be a zero-maintenance/monitoring system, just as any other, but there are at least tools out there that can help lighten the load of doing so.
     • The older a drive is, the more active monitoring it should have to keep an eye out for such issues - not sure if your signature is still accurate, but all of the drive models mentioned are at least 10+ years old, and even the generation following is now something like 7-8. When disks get anywhere near this kind of run time, the 'standard' recommendations for data protection can't really be applied (e.g. monthly parity checks and the like) - with anything over 5 years, I'd be running them at least bi-weekly, as disks often fail extremely quickly at that age. As opposed to slowly incrementing errors, I regularly see them pile up massively over a course of hours, maybe days, rarely lasting weeks.
     • Subsequent to this, desktop drives (but most especially older desktop drives) were/are kinda crap at self-reporting issues - this seemed especially true 5+ years ago, though it has gotten at least a bit better over the years. Where an enterprise or NAS drive might show itself as failed before running out of LBAs to reassign bad blocks to, desktop drives would often happily churn along as though nothing happened, corrupting data along the way. I'd check that disk's SMART data / reallocated sector count at the very least (quick example below).
     • Unraid is somewhat unique on the NAS OS front, in that it builds its array out of multiple disks containing independent file systems (of varying types) as opposed to building an array out of the physical blocks themselves by combining the disks into a virtual disk - given there's no single reporting mechanism at the FS level which would encompass all supported FS types in the array, there's almost certainly some complexity to reporting individual disk corruption out from underneath the array.
     Like I said though, I'm not disagreeing on any specific point, and in fact agree that Unraid could do more here - it should at the very least regularly run SMART tests by default, propagating errors up to the notification system, and I do hope the devs find time to integrate this into the OS so I can remove my scripts for them. It would nearly certainly save folks a lot of pain recovering from backups down the line!
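     For the SMART check mentioned above, a quick example (sdX being whichever disk you're suspicious of):

         # dump the SMART attribute table and eyeball the reallocation counters
         smartctl -A /dev/sdX | grep -iE 'reallocat|pending|uncorrect'
         # a non-zero (and climbing) Reallocated_Sector_Ct or Current_Pending_Sector on an
         # older desktop drive is a strong hint it's time to retire it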
  7. I guess I was sorta getting into semantics 😅 With ZFS, I always think of the pool layout as the FS 'equivalent'. Fsck can only really do 'fsck' because the filesystem it's checking keeps a journal (a feature shared with XFS, which is why that one has more of a direct equivalent), while ZFS is transactional (as is BTRFS) - there's no direct equivalent, namely because of the way they commit writes. No journal, nothing to replay, atomically consistent. ^^ Again, just for clarity should anyone else come across this later. If your pool becomes corrupted, there *are* methods to potentially repair it (the usual starting points are sketched below), but they require manual intervention, and the chances of success vary widely depending on the specific circumstances (pool layout, media type, etc).
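     For completeness, the usual starting points when a pool does get into trouble - read the man pages before running any of these, as 'zpool import -F' in particular discards recent transactions, and what's safe depends entirely on your situation:

         zpool export poolName
         zpool import -F poolName     # recovery-mode import; rolls back the last few transactions to reach a consistent state
         zpool clear poolName         # reset the error counters once the underlying cause has been dealt with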
  8. ZFS does have this, it's just referred to as a 'scrub' instead of fsck:
     zpool scrub poolName
     You can also use tools like 'zdb' for more granularity/control over the check/repair (scrub) - things like specific block failure reporting, running 'offline' against exported pools, etc. (Just noting for anyone else that might come across this in the future 👍 - quick examples below.)
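     To check on a scrub's progress/results afterwards, plus the zdb bit mentioned:

         zpool status -v poolName     # scrub progress, anything repaired, and any files with unrecoverable errors
         zdb -e -c poolName           # deeper metadata checksum traversal against an exported pool (see the zdb man page)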
  9. Again, I think this would've been caught by a parity check - I can't think of any reason that wouldn't be the case... While I agree that additional checks would be helpful, it seems there *is* actually a 'catch' for this in the OS, right? Or am I missing something maybe? I definitely do agree that there's more that could be done to safeguard the data, but I also at least want to acknowledge the stuff that's already there, of course...
  10. In the same vein, I'd like to see the 'unbalance' feature brought into core Unraid - the ability to choose at will where you'd like a given share's files relocated to. As my storage needs have grown more and more complex, I find myself having to jump through some pretty significant hoops to get data moved around as I'd prefer it (the usual manual workaround is sketched below).
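      In the meantime, the manual workaround is usually rsync directly between the disk shares - the paths below are just examples, and the important part is staying on /mnt/diskX paths on both sides rather than mixing in /mnt/user:

          # copy a share's contents from disk1 to disk3, preserving attributes, deleting source files as they complete
          rsync -avX --remove-source-files /mnt/disk1/media/ /mnt/disk3/media/
          # rsync leaves the (now empty) source directories behind, so clean those up afterwards
          find /mnt/disk1/media -type d -empty -delete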
  11. They review them fairly regularly, though I suspect they may not respond until they've something of a concrete answer. The way I handle consistency on the array is multi-pronged:
      • Short SMART tests are run every other day, long tests bi-weekly
      • A container called Scrutiny provides a dashboard for disk health (run command sketched below)
      • Array parity checks run monthly, with notifications sent to my phone should there be an error
      • The Integrity plugin validates that the data on disk matches what's expected - it calculates the hash of each file and stores it, then on later runs of the hash calculation generates a notification if the hashes no longer match for whatever reason
      My expectation is that, if you'd run a parity check (either manually or via a schedule), you'd have been notified of the issue then. I agree that this is less than ideal in that you'd have the added delay of however long it is until your next parity check, but at the very least, there is *some* form of check there... I do wish a lot of this was part of the base OS - some kind of sane defaults, then let us change what we want. The fact that there's actually no default schedule for running SMART tests against drives (nor any method to schedule them in the UI, actually) is something of a black eye here. I guess I never really thought about it too much, I just kind of 'set everything up', but looking back on it now at least... a lot of this really should be part of the base functionality for *any* NAS OS imo.
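      For reference, Scrutiny's all-in-one container looks roughly like the below - the image tag, port, and device list here are from memory, so double-check the project's README before relying on it:

          docker run -d --name=scrutiny \
            -p 8080:8080 \
            -v /run/udev:/run/udev:ro \
            --cap-add SYS_RAWIO \
            --device=/dev/sda --device=/dev/sdb \
            ghcr.io/analogj/scrutiny:master-omnibus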
  12. I'm betting they mean 'without stopping the array' - I agree that it's something that's sorely missing, and the primary reason I no longer use Unraid for anything where uptime is paramount, but with how the parity writes are designed to be handled, pretty sure it'd need a complete rewrite to support this unfortunately 😞
  13. Might get a better response from the Hardware subforum - this one's for completed builds, so I'm guessing most of those who might have input simply wouldn't see this here 👍
  14. Deploy LLDP (linked feature request)

      @neuernick @andyb216 @tchmnkyz please feel free to upvote the feature request linked above - it'll get more visibility there 👍
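      For anyone unfamiliar, what the request boils down to is running the lldpd daemon and exposing its neighbor view in the UI - as a rough illustration (assuming lldpd were available, which it isn't in stock Unraid today):

          lldpd                            # start the daemon; advertises this host and listens for switch announcements
          lldpcli show neighbors details   # shows which switch/port each NIC is plugged into, VLANs, etc.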
  15. That's also expected - watching the filesystem isn't free, for sure! Pretty rare it'd take 1 KB per watcher though (quick way to sanity-check this below).
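      If anyone wants to poke at this on their own box, a couple of quick commands - the ~1 KB figure is the usual 64-bit rule of thumb for an established watch, not an exact number:

          # per-user watch limit; each in-use watch costs on the order of 1 KB of kernel memory
          sysctl fs.inotify.max_user_watches

          # count inotify instances currently held open across all processes
          find /proc/*/fd -lname 'anon_inode:inotify' 2>/dev/null | wc -l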