About jfeeser


  1. I just figured it out! It turns out that, at least in my case, it was Valheim+ doing it. I had it turned on on the server side when the clients didn't have it installed. For some reason the last version didn't care about that, but the new one very much does. I installed the mod and it worked without a hitch. I'm assuming setting that variable to False on the server side would also fix it.
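For reference, the server-side switch mentioned above lives in the ValheimPlus config file. The exact section and key name below are from memory, so treat them as an assumption and check against your own `valheim_plus.cfg`:

```ini
# valheim_plus.cfg on the server (key name recalled from memory - verify
# against your own file before relying on it).
[Server]
# When true, connecting clients must also have ValheimPlus installed.
# Setting this to false should allow unmodded clients to join.
enforceMod = true
```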
  2. What ended up working for you for getting the update? I've tried turning firewalls off, disabling Pi-hole, everything short of rebuilding the container, and so far nothing has worked.
  3. Understood. So ideally I would just bond everything and then use VLANs to separate traffic. That being said, any idea why the bond is stuck in "active-backup" mode? (Also, love your nick - it's "nicely inconspicuous".)
  4. Right, the intent of the way I have it would be for eth0 (which is on the motherboard) to be the primary interface for management and Docker functions, and then eth1-4 to be bonded for all other functions (file access, primarily). Would that be the correct way to accomplish this?
  5. Hi all, I'm experiencing some odd behavior on my Unraid server while trying to set up link aggregation. The short version is that when I enable bonding for eth1-4 (four interfaces on a 4-port add-on card), the only bonding mode I can choose is "active-backup". If I choose anything else and hit Apply (such as 802.3ad, which is what I actually want to use), it just flips back to active-backup. I've got the four ports they're plugged into set up as "Aggregate" on my UniFi switch, but the mode refuses to change. Can't seem to figure it out. Attached are screenshots of my configurations - can anyone help?
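For anyone hitting the same wall: 802.3ad (LACP) only negotiates when the switch-side LAG is up on all member ports, and the currently active mode can be verified with `cat /proc/net/bonding/bond0`. Unraid keeps its bonding settings in `/boot/config/network.cfg`; the key names in this sketch are from memory, so compare them against your own file rather than taking them as gospel:

```ini
# Sketch of the relevant lines in /boot/config/network.cfg (key names are
# an assumption from memory - check your own file).
BONDING="yes"
BONDNAME="bond0"
BONDNICS="eth1,eth2,eth3,eth4"
# Numeric Linux bonding modes: 1 = active-backup, 4 = 802.3ad/LACP.
BONDING_MODE="4"
```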
  6. Good point. I just double-checked and it's actually 6 connections back to the PSU - 4 are plugged into one line and 3 into the other. Maybe switching it to 3 and 3 will help.
  7. I'll double-check this, but I'm not certain it's a power issue. It's a 1000W power supply (overkill, I know, but I had it lying around) going to a backplane with 5 power inputs; I have 3 on one line and 2 on another. When the parity drive was initially having issues, I swapped it to a location that would've been powered by the line it wasn't initially on, and the issue persisted.
  8. Hi all, wanted to reach out about a recurring problem I've had with my server. Occasionally I'll get parity disk failures where the disk shows *trillions* of reads and writes (see attached). Previously I would stop the array, remove the parity drive, start the array, stop it again, re-add it, and the parity rebuild would work without an issue. A couple of months later, the same thing would happen. Fast forward to this week: it happened again, and I thought "okay, this drive is probably on its way out". I hopped off to Best Buy, grabbed a new drive, popped it in, precleared…
  9. Currently 18, but I'm actually looking to size that count down, as it's a mix of 10TB drives all the way down to 3TB. (It's in a 4U, 24-bay chassis, so I got lazy and never "sized up" any drives - when I ran out of space I just added another one.) I'm looking to eventually (in my copious free time) take the stuff on the 3s, move it to the free space on the 10s, and remove the 3s from the array.
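Before shrinking the array like this, it's worth a back-of-envelope check that the data on the small drives actually fits in the free space on the big ones. A minimal sketch (all capacities below are made-up examples, not the poster's real numbers):

```python
# Greedy first-fit sanity check: can the data on the small drives be
# absorbed by the free space on the large drives? Assumes a drive's
# contents can be split across targets, which holds for file-level moves.

def first_fit(used_on_small, free_on_large):
    """Return True if every small drive's used TB fits into the
    remaining free TB on the large drives."""
    free = sorted(free_on_large, reverse=True)
    for amount in sorted(used_on_small, reverse=True):
        for i, available in enumerate(free):
            take = min(available, amount)
            free[i] -= take
            amount -= take
            if amount == 0:
                break
        if amount > 0:
            return False  # ran out of free space
    return True

# Example: three 3TB drives ~2.5TB used each; four 10TB drives with 3TB free each.
print(first_fit([2.5, 2.5, 2.5], [3.0, 3.0, 3.0, 3.0]))  # True: 7.5TB fits in 12TB
print(first_fit([2.9, 2.9, 2.9], [4.0, 4.0]))            # False: 8.7TB > 8.0TB
```

Since files can be split across destination drives, this reduces to a sum comparison, but the per-drive loop makes it easy to see which target fills up first.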
  10. Hi all, currently I'm running two separate servers, both Unraid - one for Docker/VMs, and one for file serving only. Specs below:
      Appserver - Motherboard: Supermicro X9DRi-LN4+; CPU: 2x Xeon E5-2650 v2 (32 cores total at 2.60 GHz); RAM: 64 GB DDR3. Running about 20 Docker containers (plex, *arr stack, monitoring, pihole, calibre, homeassistant, etc.) and 3 VMs.
      Fileserver - Motherboard: Gigabyte Z97X-Gaming 7; CPU: Core i5-4690K (4 cores @ 3.50 GHz); RAM: 16GB DDR3. Running minimal dockers for backup/syncing, etc. Hard drive space…
  11. Hi all, I've been trying to use this Docker in my existing setup with the rest of my content stack, but I'm running into some issues. Is it possible to have the Docker running on my application server with the library on a separate Unraid box that serves as my fileserver? If I use Unassigned Devices to map the share as NFS, after a while I get "stale file handle" errors when accessing the books. If I map it as SMB (with the Docker looking at the share as "Slave R/W"), I get errors that the database is locked. If I run everything local to the application server and keep the library…
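The "database is locked" symptom is the classic failure mode of SQLite over SMB/NFS, where byte-range locking is unreliable. One commonly suggested mitigation is to keep any path that contains a SQLite database on local storage and map only bulk media remotely. A sketch of that split (image name and paths are placeholders, not taken from the post):

```yaml
# docker-compose fragment - illustrative only; the image, share names, and
# mount points are assumptions, not the poster's actual setup.
services:
  calibre-web:
    image: linuxserver/calibre-web   # assumption: LinuxServer.io image
    volumes:
      # App config (its own SQLite DB) stays on fast local storage.
      - /mnt/cache/appdata/calibre-web:/config
      # Only the book files live on the remote share.
      - /mnt/remotes/fileserver_books:/books
```

One caveat: Calibre keeps its `metadata.db` inside the library folder itself, so if that database also has to live on the remote share, the usual advice is to host the app on the same box as the library instead.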
  12. Interesting. What would go on the unassigned device - the VM files, or the Dockers? Logistically, does it make a difference, so long as they're separate? Alternatively, should I divvy up the "downloader" Dockers on the one SSD and keep all of the "static" Dockers on the other? Just trying to figure out the best path forward - thanks again for the advice.
  13. Hi all, I'm building my second Unraid server, separate from my existing fileserver, that'll basically only exist to run applications, both Docker and VM (two Windows VMs, and the usual "Plex stack" on the Docker side). It's currently running with just a single internal 500GB spinning disk, and I'm weighing options to speed it up, as I appear particularly I/O bound. I'm planning on buying two 500GB SSDs, and I'm looking for the best way to configure the three drives. I was thinking of leaving the one 500GB as the "array" (with no parity) and using the two SSDs as a cache pool, then…
  14. OMG you're a star. Thanks so much!
  15. Apologies if this has already been asked, but is it possible to somehow make collapsible groups of related containers in the UI (or is there some addon that would get me this functionality)? I'd love to have groups such as:
      - monitoring
        - telegraf
        - influx
        - grafana
      - games
        - steamcache
        - minecraft
      That kind of thing, instead of one long list of 30-some containers. That way I can categorize and collapse the categories and make the whole thing neater. Is anything like this possible?
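Unraid's built-in Docker tab doesn't group containers on its own (community plugins may add this), but outside the UI, docker-compose "projects" give exactly this shape: one file per category, with the folder name acting as the group. A sketch mirroring the example groups above - image tags are illustrative:

```yaml
# monitoring/docker-compose.yml - one compose project per category.
services:
  telegraf:
    image: telegraf:latest
  influxdb:
    image: influxdb:latest
  grafana:
    image: grafana/grafana:latest
```

A sibling `games/docker-compose.yml` would hold steamcache and minecraft; `docker compose ls` then shows one line per group rather than one per container.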