BVD

Members
  • Posts: 140

  • URL: vandyke.tech


BVD's Achievements

Apprentice (3/14)

46 Reputation

  1. There are folks who use unraid for massive archival tasks where tape isn't really an option, such as the web archive, where multiple arrays would be of significant benefit. I don't think it's a big percentage of users or anything, but they're power users for sure, and often have 100+ drives - some running one unraid instance as the main hypervisor, then running nested unraid VM instances to allow for multiple arrays beneath. Definitely not a common scenario... but one way we could all see a benefit: by splitting our usage across multiple arrays, we'd strongly reduce the possibility of a full failure during reconstruction. I don't relish the idea of having more than, say, 12 drives or so in any type of double-parity array (regardless of zfs, unraid, lvm/MD, etc). Still, idk how many actually run that many drives these days, I might be completely off base 🤔
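
The reliability intuition above can be made concrete with a rough back-of-the-envelope calculation. The sketch below is illustrative only: it assumes a ~2% annualized failure rate per drive and a 24-hour reconstruction window (neither figure comes from the post), and compares the chance of losing two additional drives mid-rebuild in a 12-drive versus a 24-drive double-parity array.

```python
# Rough sketch of why smaller arrays feel safer during a rebuild.
# Assumptions (not from the post): ~2% annualized failure rate per drive
# and a 24-hour rebuild window; real-world rates vary widely.
from math import comb

AFR = 0.02                      # assumed annualized failure rate per drive
REBUILD_HOURS = 24              # assumed time to reconstruct one drive
p_drive = 1 - (1 - AFR) ** (REBUILD_HOURS / (365 * 24))  # per-drive failure chance during the window

def p_extra_failures(n_remaining: int, k: int) -> float:
    """Probability that at least k of the remaining drives fail during the rebuild."""
    return sum(
        comb(n_remaining, i) * p_drive**i * (1 - p_drive) ** (n_remaining - i)
        for i in range(k, n_remaining + 1)
    )

for drives in (12, 24):
    # One drive already failed; dual parity tolerates one more loss during rebuild,
    # so a "full failure" needs two additional drives to die before it finishes.
    print(drives, "drives:", f"{p_extra_failures(drives - 1, 2):.2e}")
```

Either way the absolute numbers are small, but the risk grows roughly with the square of the remaining drive count, which is the gist of the "keep arrays small" argument.
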
  2. Thanks! It's more so a dedicated workstation motherboard than a true server platform, but I get where you're coming from. Going workstation is the only simple way I've found to get high quality audio playback without resorting to a separate card/device, and that just makes other things needlessly complicated imo. Workstation boards seem to offer the best of both worlds when it comes to home servers - IPMI, gobs of PCIe, system stability testing, double (or more) the RAM channels - all without losing the benefits of a consumer board (audio, plenty of USB ports, etc). Only downside... they charge a friggin arm and a leg. This was a purchase a little over a year in the making, mostly paid for with side-hustles, or I'd never have gotten up the nerve to pull the trigger on it lol. As to the M.2 - I've honestly been quite happy with it! It peaks at about 53°C during sustained heavy IO now that I've got the card arrangement optimized a bit, which is basically ideal for NAND/NVMe, and I intentionally went with PCIe 3.0 initially as part of the overall plan to limit unnecessary power consumption. Best of all (as a cheapskate), M.2 is far easier to find great deals on than its U.2/2.5" counterparts. If you can find a board that has enough onboard NVMe connections to satisfy your needs, I personally say "go for it" - it beats the snot out of throwing in an add-in bifurcation card, which not only takes up another slot, but more importantly adds a single point of failure for all connected devices.
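
For anyone wanting to reproduce the "53°C under sustained IO" observation, here's a minimal monitoring sketch. It assumes smartmontools is installed and that the JSON key layout matches current smartctl output for NVMe devices; the device path is just an example.

```python
# Minimal sketch (assumes smartmontools is installed): poll an NVMe drive's
# composite temperature during a sustained-IO test. Device path is an example.
import json, subprocess, time

DEVICE = "/dev/nvme0"  # hypothetical device path - adjust for your system

def nvme_temp_c(device: str) -> int:
    out = subprocess.run(
        ["smartctl", "-A", "-j", device],
        capture_output=True, text=True, check=True
    ).stdout
    data = json.loads(out)
    # Key names as reported by smartctl's JSON output for NVMe health logs
    return data["nvme_smart_health_information_log"]["temperature"]

if __name__ == "__main__":
    for _ in range(10):               # sample once a minute for ~10 minutes
        print(f"{nvme_temp_c(DEVICE)} °C")
        time.sleep(60)
```
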
  3. Finally found a few free minutes to update! The server's been running since Sept 5th without so much as a hiccup! However, it took a little planning to get it that way... The problem area starts with this: both GPUs are dual slot, and stacking them means the 2070's intake is about 40% covered by the backplate of the 1650S. I then thought to use the Intel NIC as the in-between, but it still covers a bit - and if I can avoid covering it at all, all the better, as this is *right* on top of the NVMe drives and any additional heat the 2070 radiates means heat added to them. In the end, I went ahead and put the HBA here instead. It's not perfect (nothing is for me I guess), but after running a temperature probe to the HBA and finding it's well within spec, it's about as good as it gets, and it'll do for now! Here's (almost) what it looks like as of today - the 32GB DIMMs I ordered didn't show up in time, and I really needed to get this thing up and running before start of business Monday morning so everyone could access their files and backups could pick back up, so this is where we're at till probably Thanksgiving or so. Running through the cards, from the top:
     1. Intel 1.2TB NVMe - a hold-over from the last server setup, which only exists for caching writes to the unraid array; it seems the md driver is modified as part of Unraid's base deployment, or this would be removed in favor of LVM with an XFS volume over the 4 onboard SN750s. BTRFS just doesn't have the performance needed (not to mention other areas of concern) and I'm too cheap to buy 4 more M.2 drives just to up the capacity lol
     2. Intel NIC - pfSense, etc.
     3. RTX 2070 - this serves two purposes: it's either running my gaming VM whenever I find time to play, or serving an unraid VM for an additional Tdarr node or for testing new things prior to implementing them on the main hypervisor
     4. LSI 2308-based HBA - just connecting any of the drives I don't have onboard connectors for
     5. GTX 1650S - the main hypervisor's GPU for Plex, Tdarr, and facial recognition in both nextcloud and frigate (well, until I can convince myself that I need a Coral accelerator anyway)
     Hope to update again sometime after Thanksgiving!
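
For reference, the "LVM with an XFS volume over the 4 onboard SN750s" alternative mentioned for card 1 would look roughly like the dry-run sketch below. Device names are hypothetical and the commands are destructive, so it only prints them unless the flag is flipped.

```python
# Illustrative only - a dry-run sketch of the "LVM + XFS over 4 NVMe drives"
# alternative mentioned above. Device names are hypothetical; these commands
# are destructive, so by default they are printed rather than executed.
import subprocess

DEVICES = ["/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1", "/dev/nvme4n1"]  # assumed
DRY_RUN = True

commands = [
    ["pvcreate", *DEVICES],                                  # mark drives as physical volumes
    ["vgcreate", "vg_cache", *DEVICES],                      # group them into one volume group
    ["lvcreate", "-i", "4", "-I", "128", "-l", "100%FREE",   # 4-way stripe, 128K stripe size
     "-n", "lv_cache", "vg_cache"],
    ["mkfs.xfs", "/dev/vg_cache/lv_cache"],                  # XFS on top of the striped LV
]

for cmd in commands:
    if DRY_RUN:
        print(" ".join(cmd))
    else:
        subprocess.run(cmd, check=True)
```
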
  4. I had severe DNS latency when using AirVPN years back, though never with that level of "exactly 9s" consistency. If I'm remembering right, after pulling my freakin hair out for something like 3 days, I ended up finding IPv6 as the common underlying component... No idea "whose fault" it was - AirVPN, the RT-AC68U, FF [version 3 at the time], hell even .NET as it was happening on both linux and windows machines, etc. I just found that once I squashed anything that even thought itself to be IPv6, it just... vanished. Idk if that old-as-hell "problem", whatever it was, is still around, but figured I'd mention it given you'd mentioned 'em. Best of luck!
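
A quick way to test whether IPv6 (AAAA) resolution is the stalling component, in the spirit of the troubleshooting described above - this is a generic sketch, not something from the post, and the hostname is a placeholder:

```python
# Compare IPv4-only vs IPv6-only name resolution times to see whether
# AAAA lookups are what's stalling. Hostname is a placeholder.
import socket, time

HOST = "example.com"  # substitute the host that's slow for you

def timed_lookup(host: str, family: int) -> float:
    start = time.perf_counter()
    try:
        socket.getaddrinfo(host, 443, family=family)
    except socket.gaierror as exc:
        print(f"  lookup failed: {exc}")
    return time.perf_counter() - start

print(f"A    (IPv4): {timed_lookup(HOST, socket.AF_INET):.3f}s")
print(f"AAAA (IPv6): {timed_lookup(HOST, socket.AF_INET6):.3f}s")
```
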
  5. Sounding more like it has something to do with the way either Mozilla or Pi-hole is handling the My Servers plugin - I'd start systematically narrowing it down from there; it certainly doesn't sound like it's something in the server's config at this point
  6. @benmandude I just realized you'd noted you *don't* have the same issue with Chrome - that pretty well negates the questions posted above (well... not completely, but mostly hehehe), I really should've re-read before pulling down the HARs lol 😅 We're spending the vast majority of our wait time on DNS lookup:
     "cache": {},
     "timings": {
       "blocked": 0,
       "dns": 8999,
       "connect": 1,
       "ssl": 0,
       "send": 0,
       "wait": 24,
       "receive": 0
     },
     "time": 9024,
     "_securityState": "insecure",
     "serverIPAddress": "192.168.0.100",
     "connection": "80"
     It's the exact same amount of time for both the dashboard AND the main/array pages - I'd start with restarting FF in safe mode. Having this happen only in one browser narrows things down quite a bit, and while it doesn't rule out unraid itself completely, it's certainly far less likely it's something server side. At a glance, I'd venture one of your plugins is still causing issues. While I didn't spend a ton of time on it up to this point, a quick once-over shows us having issues connecting to graphql around the same time the My Servers plugin is doing its work. With our DNS lookups taking a full 9s, and in both instances happening right after the protocol upgrade...
     "response": {
       "status": 101,
       "statusText": "Switching Protocols",
       "httpVersion": "HTTP/1.1",
       ...
     ...it seems more likely this is an issue on the browser side; whether it's Mozilla overall or just one of your plugins interfering, we'll know more after you test FF in safe mode.
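
If it helps anyone repeating this kind of analysis, a small script can pull the same numbers out of a HAR export instead of reading it by hand. This is a generic HAR 1.2 sketch; the file name and the one-second threshold are arbitrary:

```python
# A small helper along the lines of what was done by hand above: load a HAR
# export and flag entries whose DNS phase dominates the total time.
# (File name is an example; HAR 1.2 structure assumed.)
import json

with open("unraid-dashboard.har") as f:        # hypothetical file name
    entries = json.load(f)["log"]["entries"]

for e in entries:
    dns = e["timings"].get("dns", -1)
    total = e.get("time", 0)
    if dns > 1000:                             # flag anything spending >1s in DNS
        print(f"{dns:>6.0f} ms DNS / {total:>7.0f} ms total  {e['request']['url']}")
```
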
  7. Awesome! I'll try to check out the HAR tonight if time allows - if not, it'll likely be next weekend due to work stuffs. Quick questions:
     * What're you using as your router (pfSense, OPNsense, Asus/TP-Link/etc., and which model)?
     * As for DNS - is apollo's MAC and IP explicitly defined in your router, or are you relying upon the host and/or DHCP settings?
  8. @Darksurf just a quick update here from some time I spent on this last night - in short, it's a bit of a PITA, and requires both adding new packages and updating existing ones (not just those from the Nerd and Dev packs, but core libraries within unraid as well). This is honestly pretty dangerous, as any updates to unraid then have a much higher risk of breaking things due to dependency hell. Just to give an idea of where it's at so far:
     - mkisofs missing - install cdrtools from the Nerd pack
     - bison and/or yacc required - pulled bison from the Slackware 14.2 repo
     - augeas/libtirpc missing - libtirpc breaks Nerd pack and Dev pack updates (which I should've foreseen honestly 😓); augeas dependencies missing, but those can be loaded from the Dev pack. Eventually decided to set env variables in make so it didn't try to fetch packages on its own, as we don't really want that anyway...
     - gperf missing - self-compiled and added
     - libmagic missing - this is part of the file tool; unraid includes 5.36...
     This is where I stopped. I'm going to have to re-evaluate whether I need this badly enough on the core OS to accept the risks associated with all of these modifications... Seems there's likely a pretty valid reason it hasn't been in Slackware before, perhaps? For now I think I'm going to put this on pause and re-evaluate what other options are available.
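
A small pre-flight check along these lines can save a few of the dead ends listed above. This is just a hypothetical convenience script - the tool list is partial and based only on the dependencies mentioned in the post:

```python
# Hedged convenience sketch (not part of any package): check whether the build
# prerequisites mentioned above are present before attempting a libguestfs build.
import shutil, subprocess

TOOLS = ["mkisofs", "bison", "gperf", "pkg-config", "make", "gcc"]  # partial list

missing = [t for t in TOOLS if shutil.which(t) is None]
print("missing tools:", ", ".join(missing) if missing else "none")

# 'file' (libmagic) version check - libguestfs wants a reasonably recent one
try:
    out = subprocess.run(["file", "--version"], capture_output=True, text=True).stdout
    print(out.splitlines()[0])
except FileNotFoundError:
    print("file/libmagic not installed")
```
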
  9. I'd determined I needed the same (libguestfs as a whole, specifically), and I'm planning to scope out the amount of work it'll take to do so this weekend - if it compiles without issue, I'll try to remember to check what the process is to submit it to Community Apps as a plugin so you can use it that way 👍 EDIT / P.S. @Darksurf - FYI, looks like someone's made a docker container for this use case already: https://github.com/brandonkal/guestfs I'm wanting this done on the hypervisor side though, as it pretty well provides the keys to the kingdom, and I'd just rather have that kind of power restricted to the root user.
  10. @benmandude Some things that'd help troubleshoot this:
     * Testing in a separate browser (Edge, Safari, Chromium, anything really)
     * When you're seeing this, does clearing the browser cache help? If that's a pain, try using a private/incognito window - it should give a comparable result.
     * Connecting from another device - the browser doesn't matter, just another system that's not logged in to your server's UI at that time
     And for the "this is everything one could possibly need to see what's happening here" version (or at least to point to the next place to check logs from):
     * When you're encountering this, create a HAR file recording the occurrence and then share it here for analysis; this link has solid steps for doing so across multiple browser types. PLEASE take special note of the warnings section of the article - while creating the HAR recording, do NOTHING other than recreate/show the loading issue, then stop the recording (don't go logging into your bank or something silly like that lol)
     If you go this route, attach the file here so it can be gone through. I'm sure the limetech guys can sort it out, but if it comes in this weekend, I should have time to look at it sometime Monday and see what I can make of it.
  11. Any reason not to just use one of the icons from an app contained within the folder and render it in grayscale? Or if I'm misunderstanding the question, can you clarify what you mean a bit?
  12. Do you have the Full Width plugin installed? Also, do you notice if this typically occurs with smaller windows (as opposed to full screen or otherwise)? I've noticed that some pages get kinda wonky when viewed in a window that's been resized smaller, or any time I'm browsing from something that's only got around 800x600 of resolution available to it. It's rare for me to do so, so I've just left both plugins installed and keep it in mind any time I see something screwy like this.
  13. Hard drives fail, I guess, is the gist of it - it's an expected occurrence whose frequency increases with the number of drives you have attached. Since we know hard drives die, it just makes sense to me to minimize the impact of addressing those failures when they do occur (taking everything down to address an inevitable eventuality doesn't seem in line with the idea of a NAS, at least to me anyway)
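
To put a rough number on "frequency increases with the number of drives": assuming (purely for illustration) a ~2% annualized failure rate per drive, the chance of seeing at least one failure in a given year climbs quickly with fleet size.

```python
# Illustrative arithmetic (assumed ~2% annualized failure rate per drive):
# probability of at least one drive failure in a year as drive count grows.
AFR = 0.02
for n in (4, 8, 12, 24, 48):
    p_at_least_one = 1 - (1 - AFR) ** n
    print(f"{n:>2} drives -> {p_at_least_one:.1%} chance of >=1 failure per year")
```
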
  14. The broader use case here is being able to replace a failed drive without taking down everything, instead limiting the impact strictly to those components necessary to do so imo. Anyone that's self-hosting, whether for business or for friends and family, loathes downtime, as it's a huge pain to try to schedule it at a time that's convenient for all. There are currently 4 family businesses as well as 7 friends that rely upon my servers since I've migrated them away from the google platform, and even with that small number, there are always folks accessing share links, updating their grocery lists, etc. All that stuff is on the pools. For the home NAS user with no other individuals accessing their systems for anything, I could see how it wouldn't really matter. But I feel like it's not uncommon for an unraid user to have their system set up in such a way that taking everything down is necessary. As far as reboots, that's a separate thing imo - it could also be addressed in the UI by adding a samba restart button in the event of an edit to SMB-extra, allowing a function-level reset (FLR) to be executed from the UI instead of the CLI for VFIO binds, and so on. Most of these things can be done without reboots on more modern hardware, it's just not yet available in the UI. To me, this is a big logical step towards making things smoother for the user.
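
As an aside, the FLR-from-the-UI idea boils down to wrapping something like the following behind a button - a sketch only, using the kernel's sysfs reset hook; the PCI address is a placeholder, and it needs root plus a device that actually supports reset:

```python
# Sketch of the kind of plumbing a UI button could wrap: trigger a PCI
# function-level reset via sysfs instead of dropping to the CLI.
# The device address is hypothetical.
from pathlib import Path

PCI_ADDR = "0000:0b:00.0"  # example VFIO-bound device address

def flr(addr: str) -> None:
    reset_node = Path(f"/sys/bus/pci/devices/{addr}/reset")
    if not reset_node.exists():
        raise RuntimeError(f"{addr} does not expose a reset method")
    reset_node.write_text("1")   # kernel performs FLR (or the best reset it supports)

if __name__ == "__main__":
    flr(PCI_ADDR)
```
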
  15. +1 from me as well. I'd like to try to help justify the effort with a quick list of use cases:
     Firewall/Router - As others have noted, many of us run pfSense, OPNsense, ipFire, VyOS, etc. Virtualizing the router makes sense when running an unraid server, given we're already running many applications whose function relies heavily upon networking, and the horsepower necessary to properly manage/monitor that traffic (traffic shaping/policing, threat detection, deep packet inspection, and so on) means a separate physical machine isn't as cheap as buying an SBC like a pi/odroid/whatever - it can quickly add up to hundreds of dollars (not to mention the electrical efficiency lost by not having it in the same box).
     Auth / Domain Controllers - For self-hosters and businesses alike, LDAP, DNS, TOTP, and others are often needed. I currently run mine in a VPS, as I can't have users' ability to authenticate lost simply because I need to replace or add a drive.
     Home Automation - While many choose to use docker containers for this, others use Home Assistant OS. Having all your automation go down every time you have to take down the array is a significant annoyance. As home automation becomes more and more mainstream, I can see a time in the not-too-distant future where integrated door locks and similar access controls are considered 'normal' - and the impact of completely losing your home automation's functionality will grow accordingly.
     Mail server - I doubt many of us are running our own mail servers, but I know at least a few are. I'm willing to bet that those who are, are also running unraid as a VM themselves under proxmox/vmware/(etc), because this is something you absolutely *can't* have go down, especially for something as simple as adding storage.
     I'm sure there are others I'm missing, but these are the big ones. Once ZFS is integrated, it'd be great to get some attention to this - I understand there are some complexities here, so I'll attempt to address some of them where I can with some spitballed ideas:
     Libvirt lives on the array; how do we ensure its availability?
     * Make the system share require a pool-only assignment (cache, but it could be any one of a number of pools) in order to enable this feature ('Offline VM Access - checkbox here'). I think this is the easier method, or
     * A dedicated 'offline' libvirt/xml storage location - this seems like it'd be more cumbersome as well as less efficient, given the need for dedicated devices.
     The array acts as one single storage pool to all applications; how can we ensure the VM's resources will be available if we stop the array?
     * Unassigned Devices - As @JonathanM noted, should limetech take over UD, it could be used to keep this completely outside the active array, or
     * Split the 'stop' function of the array - In conjunction with option 1 above, add a popup when selected stating that 'capacity available for VMs will be limited to the total capacity of the assigned cache pool' or something... but instead of a one-button 'it's all down now', create two separate stop functions: one for stopping the array (a notification pops up warning that all docker containers will be taken offline, similar to the current popup), and another for shutting down both the array and the pools (the popup again notes the impact).
     Idk how useful any of these musings are, but it's been on my mind for a while so I figured what the heck lol. Anyway, BIG +1 for this feature request!
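
One piece of plumbing the 'Offline VM Access' idea would need is a check of which VMs are actually pool-only. Below is a spitballed sketch using the libvirt Python bindings; the /mnt/cache and /mnt/user path conventions are assumptions for illustration, not anything Limetech has committed to.

```python
# Spitballed sketch of the check such a feature would need: which defined VMs
# have all of their disks on a pool (e.g. /mnt/cache) and could therefore stay
# up while the array itself is stopped. Path conventions are assumptions.
import xml.etree.ElementTree as ET
import libvirt  # requires the libvirt-python bindings

POOL_PREFIXES = ("/mnt/cache/",)                 # assumed "safe" pool locations
ARRAY_PREFIXES = ("/mnt/user/", "/mnt/disk")     # assumed array-backed locations

conn = libvirt.open("qemu:///system")
for dom in conn.listAllDomains():
    root = ET.fromstring(dom.XMLDesc())
    sources = [
        d.get("file") for d in root.findall(".//devices/disk/source")
        if d.get("file")
    ]
    array_backed = [s for s in sources if s.startswith(ARRAY_PREFIXES)]
    status = "array-dependent" if array_backed else "pool-only (could stay up)"
    print(f"{dom.name():<20} {status}")
conn.close()
```
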