Everything posted by BVD

  1. Maybe we're talking past each other - I'll try to explain once more. It's not that setting up two Radarr instances isn't something that can be done, it's that it's additional, unnecessary complexity, and it wastes space with duplicate files - and for someone who doesn't want to waste space, I'm surprised that's the route you went, tbh.

     As for downloading vs rebuilding, two key differences: 1. You don't have to worry about re-locating what was lost, then hoping it's still available, or worse, re-ripping it. 2. Far more importantly, you continue to have access to all your content without needing to wait to re-accumulate it.

     This is all a bit of a moot point for most, though, imo - if all you ever store is Plex data, why even use Unraid? Far more likely, there's personal content there as well. If you're ok with losing that along with your media, I suppose that's up to the individual, but I'm not. And for the immediately expected response of "yeah, but you should back that up" - I back up everything. That doesn't mean I ever want to lose access to any of it and have to restore from backup. Hope for the best, plan for the worst.

     Until we have the ability to create multiple arrays - actual Unraid arrays - I don't see why anyone would argue *against* using additional parity when available, unless they've just never encountered drive failure before; it's a poor recommendation. Drive failures happen, and if you've not encountered one yet, you will. It may not be tomorrow, maybe not even next year or three years from now, but it'll happen. If you feel parity isn't a worthwhile investment, you either don't value your data, or you don't value your time *and* trust your backups, both to a degree that is unrealistic for the majority of users.
  2. I'll second that! I recently hit 60TB used after my parents got through adding their stuff to my server so they could access it remotely - that was a friggin wakeup call, lol: "your 70% utilized array as of last week is 95% full as of today". Ugh. Tdarr to the rescue! Transcoding just offers flexibility without the management hassle, and gives me better overall resource utilization, as the GPU is shared amongst numerous other services anyway - Nextcloud for facial recognition of photo backups, image detection in the NVR, video acceleration for a Webtops container I've been playing with, Plex, and now a Tdarr node. Might as well get the most out of it!
  3. If you don't mind the noise (they're not *that* noisy, tbh), you can save significant funds by going with the Exos drives as opposed to the IronWolfs - for about the same price as a 12TB IronWolf Pro, you can get a 16TB Exos. Same 5 years of warranty, still runs cool (mine has never hit above 34°C, even under a parity check), just without the 'recovery' warranty. That's the route I went anyway, and I couldn't be happier!

     I've been trying to find a 3060 for 7 months now so my wife could game with me, both of us running off the Unraid server, but without much luck - the EVGA version was set at $329 originally, now up to something like ~$400 new, and even at that price (which seems absurd for an 'entry' level gaming card, lol), they're basically unobtainium it seems. I'm sure you can find either a good use for yours, or a willing buyer, without much trouble! At these nutty market rates, hell, you could probably buy a couple 16TB Exos with the funds or something, lol.
  4. This is almost certainly your router settings. I'd start there.
  5. There are counters to each of these:

     Why not have everything in 1080p?
     A. I like having 4K available for in-home consumption.

     Why not have duplicate copies of everything then? Or at least dupes of your 4Ks?
     A. Because it's annoying to manage, and adds additional steps and/or complexity even with the highest levels of automation possible.

     Why parity for Plex media?
     A. Because many of us have limited monthly bandwidth allocations, and waiting 40 months to re-download 40TB+ (or paying extra for "unlimited"), re-ripping everything, or restoring from a backup takes too dang long, and is best avoided where possible. Even losing one 12TB drive worth of data (as you so aptly mentioned) means 12 months of rations - a rough sketch of the math is below. If you're ok waiting that long, or ok being gouged by your ISP instead, that's more than I would be, at least.

     Why transcode at all?
     A. It's the best way to have the most flexibility in how media can be consumed remotely. It's especially important for those on a cable connection, where you might have gigabit downloads but are limited to something comparatively stupid like 25Mbps upload.

     Idk, these questions don't really seem very thoroughly considered or thought out imo, but my apologies if I've somehow misunderstood them in some way.
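     A minimal sketch of the math behind the bandwidth point above - the 1TB/month cap here is an assumption for illustration (it just happens to line up with the 40TB / 40-month figure), not anyone's actual plan:

     ```python
     # Rough re-download time under a monthly ISP data cap (illustrative numbers only).
     def months_to_redownload(lost_tb: float, monthly_cap_tb: float = 1.0) -> float:
         """Months needed to re-download `lost_tb` if the whole cap went to recovery."""
         return lost_tb / monthly_cap_tb

     if __name__ == "__main__":
         # Assumed 1 TB/month cap, matching the 40TB -> 40 months figure above
         print(f"One lost 12TB drive: {months_to_redownload(12):.0f} months")
         print(f"Whole 40TB library:  {months_to_redownload(40):.0f} months")
         # In practice it's worse, since normal monthly traffic eats part of the cap too.
     ```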
  6. Are you still dealing with this, by chance? Just looking through the log, the main errors I see seem to have to do with resizable BAR; if you could point out the timing of when this was encountered (e.g. "last issue noted at 1500 Eastern", or at least as close as you can recall), that'd help.

     This seems to be where the issue first starts (from your docker logs):

         time="2021-10-12T13:25:36.576501793+08:00" level=error msg="873581800cf8e11d577fd6b1fd5934a3fe3665162523052dc21cd102d476fd07 cleanup: failed to delete container from containerd: no such container"

     Which is followed by this from syslog:

         Oct 12 13:25:36 Magnus kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
         Oct 12 13:25:36 Magnus kernel: caller _nv000723rm+0x1ad/0x200 [nvidia] mapping multiple BARs
         Oct 12 13:25:37 Magnus kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
         Oct 12 13:25:37 Magnus kernel: caller _nv000723rm+0x1ad/0x200 [nvidia] mapping multiple BARs

     This all happens right after a login, so it seems like maybe some user action initiated all this? Not sure...

         Oct 12 13:25:07 Magnus webGUI: Successful login user root from 10.253.0.2
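     Since that kind of triage boils down to lining up timestamps across two different logs, here's a minimal sketch of one way to do it - the syslog path, the hard-coded year, and the window size are all assumptions, so adjust to taste:

     ```python
     #!/usr/bin/env python3
     """Pull syslog lines that landed within a few seconds of a docker-log timestamp.

     A rough sketch - expects standard 'Mon DD HH:MM:SS host ...' syslog lines (no year field).
     """
     from datetime import datetime, timedelta
     from typing import Optional

     SYSLOG = "/var/log/syslog"  # adjust to wherever your syslog actually lives
     YEAR = 2021                 # classic syslog timestamps don't carry the year

     def syslog_time(line: str) -> Optional[datetime]:
         try:
             return datetime.strptime(f"{YEAR} {line[:15]}", "%Y %b %d %H:%M:%S")
         except ValueError:
             return None         # continuation or otherwise unparseable line - skip it

     def lines_near(target: datetime, window_s: int = 30):
         lo, hi = target - timedelta(seconds=window_s), target + timedelta(seconds=window_s)
         with open(SYSLOG, errors="replace") as f:
             for line in f:
                 ts = syslog_time(line)
                 if ts and lo <= ts <= hi:
                     yield line.rstrip()

     if __name__ == "__main__":
         # The docker cleanup error above fired at 13:25:36 local time on Oct 12
         for line in lines_near(datetime(YEAR, 10, 12, 13, 25, 36)):
             print(line)
     ```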
  7. It's still a WIP - you can follow the dev work on it here: https://github.com/DualCoder/vgpu_unlock/issues/8 I wouldn't expect anything in the short term, tbh - this entire GPU generation has been such a friggin heartache for so many folks, I wouldn't expect anything until either the next gen or, if we're lucky, the Super refresh.

     I concur with @Michael_P above - since you're running Intel, that iGPU is about all anyone could ask for when it comes to Plex transcoding. IMO at least, this is one of the few places where Intel is clearly the best choice - any Intel system from Coffee Lake onward is about as good as it gets for bang-for-buck in a Plex server. AMD just doesn't have a viable alternative (that may be mostly on Plex though, as they've not updated the transcode engine to support AMD's APUs nearly as well), and the efficiency you'll get with the iGPU simply can't be beaten. Use the iGPU for your media server/NVR/whatever needs, pass your 3060 through to your gaming VM, and enjoy the best of both worlds.
  8. I asked about buying replacement trays and a spare midplane, but they wouldn't even consider anything until I showed them failed hardware, so I just stopped buying from 'em - if I've gotta wait until something's dead to even order a replacement (let alone however long it'd take to ship), that's just not viable for me. The design itself isn't conducive to dual-sided air intake imo, not without some Dremel work anyway.
  9. There are folks who use Unraid for massive archival tasks where tape isn't really an option, such as the web archive, where multiple arrays would be of significant benefit. I don't think it's a big percentage of users or anything, but they're power users for sure, and often have 100+ drives - some running one Unraid instance as the main hypervisor, then running nested Unraid VM instances to allow for multiple arrays beneath. Definitely not a common scenario... but one way we could all see a benefit: by being able to split our usage across multiple arrays, we'd strongly reduce the possibility of a full failure during reconstruction - I don't relish the idea of having more than, say, 12 drives or so in any type of dual-parity array (regardless of ZFS, Unraid, LVM/MD, etc). Still, idk how many actually run that many drives these days, I might be completely off base 🤔
  10. Thanks! It's more of a dedicated workstation motherboard than a true server platform, but I get where you're coming from. Going workstation is the only simple way I've found to get high-quality audio playback without resorting to some separate card/device, and that just makes other things needlessly complicated imo. Workstation boards seem to offer the best of both worlds when it comes to home servers - IPMI, gobs of PCIe, system stability testing, double or more the RAM channels, and all without losing the benefits of a consumer board (audio, plenty of USB ports, etc). Only downside... they charge a friggin arm and a leg. This was a purchase a little over a year in the making, mostly paid for with side-hustles, or I'd never have gotten up the nerve to pull the trigger on it lol.

      As to the M.2 - I've honestly been quite happy with it! It peaks at about 53°C during sustained heavy IO now that I've got the card arrangement optimized a bit, which is basically ideal for NAND/NVMe, and I intentionally went with PCIe 3.0 initially as part of the overall plan to limit unnecessary power consumption. Best of all (as a cheapskate), M.2 is far easier to find great deals on than its U.2/2.5" counterparts. If you can find a board that has enough onboard NVMe connections to satisfy your needs, I personally say "go for it" - it beats the snot out of throwing in an add-in bifurcation card, which not only takes up another slot, but more importantly adds a single point of failure for all connected devices.
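      If anyone wants to keep an eye on their own M.2 temps under load, here's a minimal sketch that just walks the kernel's hwmon entries in sysfs - the paths follow the standard Linux hwmon layout, but treat that as an assumption and sanity-check against your own /sys tree (or just use smartctl/the dashboard):

      ```python
      #!/usr/bin/env python3
      """Print NVMe temperatures from /sys/class/hwmon - a quick sketch, not a monitoring tool."""
      from pathlib import Path

      def nvme_temps():
          for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
              name_file = hwmon / "name"
              if not name_file.exists() or name_file.read_text().strip() != "nvme":
                  continue
              for temp_file in sorted(hwmon.glob("temp*_input")):
                  label_file = hwmon / temp_file.name.replace("_input", "_label")
                  label = label_file.read_text().strip() if label_file.exists() else temp_file.name
                  # hwmon reports millidegrees Celsius
                  yield label, int(temp_file.read_text()) / 1000

      if __name__ == "__main__":
          for label, celsius in nvme_temps():
              print(f"{label}: {celsius:.1f} °C")
      ```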
  11. Finally found a few free minutes to update! The server's been running since Sept 5th without so much as a hiccup! However, it took a little planning to get it that way...

      The problem area starts with this: both GPUs are dual-slot, and stacking them means the 2070's intake is about 40% covered by the backplate of the 1650S. I then thought to use the Intel NIC as the in-between, but it still covers a bit - and if I can avoid covering it at all, all the better. As this is *right* on top of the NVMe drives, any additional heat the 2070 radiates means heat added to them. In the end, I went ahead and put the HBA there instead. It's not perfect (nothing is for me, I guess), but after running a temperature probe to the HBA and finding it's well within spec, it's about as good as it gets for me, and it'll do for now!

      Here's (almost) what it looks like as of today: the 32GB DIMMs I ordered didn't show up in time, and I really needed to get this thing up and running before start of business Monday morning so everyone could access their files and backups could pick back up, so this is where we're at until probably Thanksgiving or so.

      Running through the cards, from the top:
      1. Intel 1.2TB NVMe - a hold-over from the last server setup, which only exists for caching writes to the Unraid array; it seems the md driver is modified as part of Unraid's base deployment, or this would be removed in favor of LVM with an XFS volume over the four onboard SN750s. BTRFS just doesn't have the performance needed (not to mention other areas of concern), and I'm too cheap to buy 4 more M.2 drives just to up the capacity lol
      2. Intel NIC - pfSense, etc.
      3. RTX 2070 - this serves two purposes: it's either running my gaming VM whenever I find time to play, or serving an Unraid VM for an additional Tdarr node or for testing out new things prior to implementing them on the main hypervisor
      4. LSI 2308-based HBA - just connecting any of the drives that I don't have onboard connectors for
      5. GTX 1650S - the main hypervisor's GPU for Plex, Tdarr, and facial recognition in both Nextcloud and Frigate (well, until I can convince myself that I need a Coral accelerator anyway)

      Hope to update again sometime after Thanksgiving!
  12. I had severe DNS latency when using AirVPN years back, though never with that level of "exactly 9s" consistency. If I'm remembering right, after pulling my freakin hair out for something like 3 days, I ended up finding IPv6 as the common underlying component... No idea "whose fault" it was - AirVPN, the RT-AC68U, FF [version 3 at the time], hell, even .NET, as it was on both Linux and Windows machines, etc. I just found that once I squashed anything that even thought itself to be IPv6, it just... vanished. Idk if that old-as-hell "problem", whatever it was, is still around, but figured I'd mention it given you'd mentioned 'em. Best of luck!
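      If anyone wants a quick way to check whether IPv6 name resolution is where their delay is hiding, here's a minimal sketch - the hostname is a placeholder, and this only times DNS lookups, nothing more:

      ```python
      #!/usr/bin/env python3
      """Time IPv4 vs IPv6 name resolution for a host - a quick diagnostic sketch."""
      import socket
      import time

      HOST = "example.com"  # placeholder - use the name that's resolving slowly for you

      for label, family in (("IPv4 (A)", socket.AF_INET), ("IPv6 (AAAA)", socket.AF_INET6)):
          start = time.monotonic()
          try:
              results = socket.getaddrinfo(HOST, 443, family=family, type=socket.SOCK_STREAM)
              note = ", ".join(sorted({r[4][0] for r in results}))
          except socket.gaierror as exc:
              note = f"no result ({exc})"
          # A big gap between the two lines points at the AAAA/IPv6 path
          print(f"{label:<12} {time.monotonic() - start:6.2f}s  {note}")
      ```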
  13. Sounding more like it has something to do with the way either Mozilla or Pi-hole is handling the My Servers plugin - I'd start systematically narrowing it down from there; it certainly doesn't sound like it's something in the server's config at this point.
  14. @benmandude I just realized you'd noted you *don't* have the same issue with Chrome - that pretty well negates the questions posted above (well... not completely, but mostly hehehe), I really should've re-read before pulling down the HARs lol 😅

      We're spending the vast majority of our wait time on DNS lookup:

          "cache": {},
          "timings": {
            "blocked": 0,
            "dns": 8999,
            "connect": 1,
            "ssl": 0,
            "send": 0,
            "wait": 24,
            "receive": 0
          },
          "time": 9024,
          "_securityState": "insecure",
          "serverIPAddress": "192.168.0.100",
          "connection": "80"

      It's the exact same amount of time for both the dashboard AND the main/array pages - I'd start with restarting FF in safe mode. Having this happen in only one browser narrows this down quite a bit, and while it doesn't rule out Unraid itself completely, it's certainly far less likely that it's something server-side. At a glance, I'd venture one of your plugins is still causing issues. While I didn't spend a ton of time on it up to this point, a quick once-over shows us having issues connecting to graphql around the same time the My Servers plugin is doing its work. With our DNS lookups taking a full 9s, and in both instances happening right after the protocol upgrade...

          "response": {
            "status": 101,
            "statusText": "Switching Protocols",
            "httpVersion": "HTTP/1.1",
            ...

      ...it seems more likely this is an issue on the browser side; whether it's specific to Mozilla overall or just one of your plugins interfering, we'll know more after you test FF in safe mode.
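      For anyone wanting to do the same kind of triage on their own capture, here's a minimal sketch of how those timings get pulled out - a HAR file is just JSON, so this only assumes the standard log/entries layout (the filename is a placeholder):

      ```python
      #!/usr/bin/env python3
      """List the slowest requests in a HAR capture, broken down by timing phase."""
      import json

      HAR_FILE = "unraid-dashboard.har"  # placeholder name - point this at your own capture

      with open(HAR_FILE, encoding="utf-8") as f:
          entries = json.load(f)["log"]["entries"]

      # Sort by total time so the 9-second offenders float to the top
      for entry in sorted(entries, key=lambda e: e["time"], reverse=True)[:10]:
          print(f'{entry["time"]:>8.0f} ms  {entry["request"]["url"]}')
          # Timing phases of -1 just mean "not applicable" per the HAR spec
          print("           " + ", ".join(f"{k}={v}" for k, v in entry["timings"].items()))
      ```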
  15. Awesome! I'll try to check out the HAR tonight if time allows - if not, it'll likely be next weekend due to work stuff. Quick questions:
      * What are you using as your router (pfSense, OPNsense, Asus/TP-Link/etc.), and which model?
      * As for DNS - are Apollo's MAC and IP explicitly defined in your router, or are you relying upon the host and/or DHCP settings?
  16. @Darksurf Just a quick update here from some time I spent on this last night - in short, it's a bit of a PITA, and requires both adding new packages and updating existing ones (not just those from the Nerd Pack and Dev Pack, but core libraries within Unraid as well). This is honestly pretty dangerous, as any updates to Unraid then have a much higher risk of breaking things due to dependency hell. Just to give an idea of where it's at so far:
      - mkisofs missing - install cdrtools from the Nerd Pack
      - bison and/or yacc required - pulled bison from the Slackware 14.2 repo
      - augeas/libtirpc missing - libtirpc breaks Nerd Pack and Dev Pack updates (which I should've foreseen, honestly 😓), and augeas dependencies are missing, but those can be loaded from the Dev Pack. Eventually decided to set env variables in make so it didn't try to fetch packages on its own, as we don't really want that anyway...
      - gperf missing - self-compiled and added
      - libmagic missing - this is part of the file tool; Unraid includes 5.36...

      This is where I stopped. I'm going to have to re-evaluate whether I need this badly enough on the core OS to accept the risks associated with all of these modifications... Seems there's likely a pretty valid reason it's not been in Slackware before, perhaps? For now I think I'm going to put this on pause and re-evaluate what other options are available.
  17. I'd determined I needed the same (libguestfs as a whole, specifically), and am planning to scope out the amount of work it's going to take me this weekend - if it compiles without issue, I'll try to remember to check what the process is to submit it to Community Apps as a plugin, and you can use it that way 👍 EDIT / P.S. @Darksurf - FYI, looks like someone's made a docker for this use case already: https://github.com/brandonkal/guestfs I want this to be done on the hypervisor side though, as it pretty well provides the keys to the kingdom, and I'd just rather have that kind of power restricted to the root user.
  18. @benmandude Some things that'd help troubleshoot this:
      * Testing in a separate browser (Edge, Safari, Chromium, anything really)
      * When you're seeing this, does clearing the browser cache help? If that's a pain, try using a private/incognito window; it should give a comparable result.
      * Connecting from another device - the browser doesn't matter, just another system that's not logged in to your server's UI at that time

      And for the "this is everything one could possibly need to see what's happening here" version (or at least to point to the next place to check logs from):
      * When you're encountering this, create a HAR file recording the occurrence and then share it here for analysis; this link has solid steps for doing so across multiple browser types. PLEASE take special note of the warnings section of the article - while creating the HAR recording, do NOTHING other than recreate/show the loading issue, then stop the recording (don't go logging into your bank or something silly like that lol)

      If you go this route, attach the file here so it can be gone through. I'm sure the Limetech guys can sort it out, but if it comes in this weekend, I should have time to look at it sometime Monday and see what I can make of it.
  19. Any reason not to just use one of the icons from an app contained within the folder and render it in grayscale? Or if I'm misunderstanding the question, can you clarify a bit what you mean?
  20. Do you have the Full Width plugin installed? Also, do you notice whether this typically occurs with smaller windows (as opposed to full screen or otherwise)? I've noticed that some pages get kinda wonky when viewed in a window that's been resized smaller, or any time I'm browsing from something that only has around an 800x600 resolution available. It's rare for me to do so, so I've just kept both plugins installed and keep it in mind any time I see something screwy like this.
  21. Hard drives fail, I guess, is the gist of it - it's an expected occurrence whose frequency increases with the number of drives you have attached. Since we know hard drives die, it just makes sense to me to minimize the impact of addressing those failures when they do occur (taking everything down to address an inevitable eventuality doesn't seem in line with the idea of a NAS, at least to me anyway).
  22. The broader use case here is being able to replace a failed drive without taking down everything, instead limiting the impact strictly to those components necessary to do so, imo. Anyone who's self-hosting, whether for business or for friends-and-family use, loathes downtime, as it's a huge pain to try to schedule it at a time that's convenient for all. There are currently 4 family businesses as well as 7 friends that rely upon my servers since I've migrated them away from the Google platform, and even with that small a number, there are always folks accessing share links, updating their grocery lists, etc. All that stuff is on the pools. For the home NAS user with no other individuals accessing their systems for anything, I could see how it wouldn't really matter. But I feel like it's not uncommon for an Unraid user to have their system set up in such a way that taking everything down is a real problem.

      As far as reboots, that's a separate thing imo - it could also be addressed in the UI by allowing a Samba restart button in the event of an edit to SMB-extra, allowing function-level reset to be executed from the UI instead of the CLI for VFIO binds, and so on. Most of these things can be done without reboots on more modern hardware, it's just not yet available in the UI. To me, this is a big logical step towards making things smoother for the user.
  23. +1 from me as well. I'd like to try to help justify the effort with a quick list of use cases:

      Firewall/Router - As others have noted, many of us run pfSense, OPNsense, IPFire, VyOS, etc. Virtualizing the router makes some sense when running an Unraid server, given we're running many applications whose sole function relies heavily upon networking, and given the horsepower necessary to properly manage/monitor that traffic (traffic shaping/policing, threat detection and deep packet inspection, and so on). When trying to make the most efficient use of resources, having a separate physical machine for this isn't as cheap as buying an SBC like a Pi/ODROID/whatever - it can quickly add up to hundreds of dollars (not to mention the electrical efficiency lost by not having it in the same box).

      Auth / Domain Controllers - For self-hosters and businesses alike, LDAP, DNS, TOTP, and others are often needed. I currently run mine in a VPS, as I can't have users lose the ability to authenticate simply because I need to replace or add a drive.

      Home Automation - While many choose to use docker containers for this, others use Home Assistant OS. Having all your automation go down every time you have to take down the array is a significant annoyance. As home automation becomes more and more mainstream, I can see a time in the not-too-distant future where integrations that control access to your home (like integrated door locks) are considered 'normal' - and the impact of completely losing access to your home automation's functionality will grow along with that.

      Mail server - I doubt many of us are running our own mail servers, but I know at least a few are. I'm willing to bet that those who are, are also running Unraid as a VM themselves under Proxmox/VMware/(etc.), because this is something you absolutely *can't* have go down, especially for something as simple as adding storage.

      I'm sure there are others I'm missing, but these are the big ones. Once ZFS is integrated, it'd be great to get some attention to this - I understand there are some complexities here, so I'll attempt to address some of them where I can with some spitballed ideas:

      Libvirt lives on the array; how do we ensure its availability?
      * Make the system share have a pool-only assignment requirement (cache, but it could be any one of a number of pools), with an 'Offline VM Access' checkbox to enable the feature (I think this is the easier method), or
      * A dedicated 'offline' libvirt/XML storage location - this seems like it'd be more cumbersome as well as less efficient, given the need for dedicated devices.

      The array acts as one single storage pool for all applications; how can we ensure the VMs' resources will be available if we stop the array?
      * Unassigned Devices - as @JonathanM noted, should Limetech take over UD, it could be used to set this completely outside the active array, or
      * Split the 'stop' function of the array - in conjunction with option 1 above, adding a popup when selected stating that 'capacity available for VMs will be limited to the total capacity of the assigned cache pool' or something... but instead of a one-button 'it's all down now', create two separate stop functions: one for stopping the array (a notification pops up warning that all docker containers will be taken offline, similar to the current popup), then another for shutting down both the array and the pools (with the popup again noting the impact).

      Idk how useful any of these musings are, but it's been on my mind for a while so I figured what the heck lol. Anyway, BIG +1 for this feature request!
  24. @GuildDarts Apologies if this isn't really a suitable place for this, but thought I'd ask - as a feature request, what about the possibility of adding simple folder functionality to the User Scripts page as well (the plugin from @Squid)?

      I was helping a friend with their Unraid server last night, having him walk me through the steps he'd taken to recreate an issue, and in the course of this saw his User Scripts page... It had something like 40+ scripts that he'd accumulated, of which around half were manual-activation-only and used infrequently (though I confirmed he *does* actually still use them). It was a mess of scrolling lol. I mostly just use cron, so I'd not really experienced this, but I'd imagine there are others who'd similarly benefit from some ability to organize the user scripts space.

      The reason I came to ask here is that we've already got the great folder functionality for docker and VMs in the plugin, and I thought to check whether this kind of thing was in line with the spirit of the plugin's purpose; I could foresee having folders/buttons for each of the options (daily/weekly/monthly/etc.), then selecting one of them to drop down the contents and display them as they would normally. If this doesn't really jive with the existing plugin's purpose/design (and I'd understand for sure, as these are static displays that don't really need 'monitoring', and that monitoring is one of the huge values of this plugin), no worries at all, and I'll hit up @Squid's plugin's support page. Thanks for taking the time!
  25. Also, I don't suppose you're using one of the new Fractal Torrent cases, are you? Even if not, if you're using a fan hub, it'd be one of the things in the chain I'd remove during testing to narrow things down 👍