jfeeser

Everything posted by jfeeser

  1. I just figured it out! It turns out that, at least in my case, it was Valheim+ doing it. I had it turned on on the server side when the clients didn't have it installed. For some reason the last version didn't care about that, but the new one very much does. I installed the mod and it worked without a hitch. I'm assuming setting that variable to False on the server side would also fix it.
  2. What ended up working for you for getting the update? I've tried turning firewalls off, disabling pihole - everything short of rebuilding the container - and so far nothing has worked.
  3. Understood. So ideally I would just bond everything and then use VLANs to separate traffic. That being said, any idea why the bond is stuck in "active/backup" mode? (Also, love your nick - it's "nicely inconspicuous".)
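For anyone else hitting this, the active bonding mode can be read straight out of the kernel's bonding status file. This is a read-only check; `bond0` is assumed to be the bond interface name (the usual default on Unraid) - adjust if yours differs:

```shell
# Print the current bonding mode, if a bond interface exists.
# Seeing "active-backup" here usually means the mode was never switched to
# 802.3ad (LACP) in network settings - and LACP also has to be enabled on
# the switch ports, or the bond falls back to active-backup behavior.
BOND_STATUS="/proc/net/bonding/bond0"
if [ -r "$BOND_STATUS" ]; then
  grep "Bonding Mode" "$BOND_STATUS"
else
  echo "no bond0 interface on this machine"
fi
```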
  4. Good point. I just double-checked and it's actually 6 connections back to the PSU - 4 are plugged into one line and 2 into the other - maybe switching it to 3 and 3 will help.
  5. I'll double-check this, but I'm not certain it's a power issue. It's a 1000W power supply (overkill, I know, but I had it lying around) going to a backplane with 5 power inputs; I have 3 on one line and 2 on another. When the parity drive was initially having issues, I swapped it to a location that would've been powered by the line it wasn't initially on, and the issue persisted.
  6. Hi all, wanted to reach out about a recurring problem I've had with my server. Occasionally I'll get parity disk failures that show the disk having *trillions* of reads and writes (see attached). Previously I would stop the array, remove the parity drive, start the array, stop it again, re-add it, and the parity rebuild would work without an issue. A couple months later, the same thing would happen. Fast forward to this week: it happened again, and I thought "okay, this drive is probably on its way out". I hopped off to Best Buy, grabbed a new drive, popped it in, precle
  7. Currently 18, but I'm actually looking to size that count down, as it's a mix of 10TB drives all the way down to 3TB. (It's in a 4U, 24-bay chassis, so I got lazy and never "sized up" any drives - when I ran out of space I just added another one.) I'm looking to eventually (in my copious free time) move the stuff on the 3TB drives to the free space on the 10TB ones and remove the 3TB drives from the array.
  8. Hi all, currently I'm running two separate servers, both Unraid, one for docker/VMs and one for fileserving only. Specs below:
     Appserver:
     Motherboard: Supermicro X9DRi-LN4+
     CPU: 2x Xeon E5-2650 v2 (16 cores / 32 threads total at 2.60 GHz)
     RAM: 64GB DDR3
     Running about 20 docker containers (plex, *arr stack, monitoring, pihole, calibre, homeassistant, etc.) and 3 VMs
     Fileserver:
     Motherboard: Gigabyte Z97X-Gaming 7
     CPU: Core i5-4690K (4 cores @ 3.50 GHz)
     RAM: 16GB DDR3
     Running minimal dockers for backup/syncing, etc.
     Hard drive space
  9. Hi all, I've been trying to use this docker in my existing setup with the rest of my content stack, but I'm running into some issues. Is it possible to have the docker running on my application server with the library on a separate Unraid box that serves as my fileserver? If I use unassigned devices to map the share as NFS, after a while I get "stale file handle" errors when accessing the books. If I map it as SMB (with the docker looking at the share as "Slave R/W"), I get errors that the database is locked. If I run everything local to the application server and keep the libra
  10. Interesting. What would go on the unassigned device - the VM files, or the dockers? Logistically, does it make a difference, so long as they're separate? Alternatively, should I divvy up the "downloader" dockers on the one SSD and keep all of the "static" dockers on the other? Just trying to figure out the best path forward - thanks again for the advice.
  11. Hi all, I'm building my second Unraid server, separate from my existing fileserver, that'll basically only exist to run applications, both docker and VM (two Windows VMs, and the usual "plex stack" on the docker side). It's currently running with just a single internal 500GB spinning disk, and I'm weighing options to speed it up, as I appear particularly I/O bound. I'm planning on buying 2 500GB SSDs, and I'm looking for the best way to configure the three drives. I was thinking of leaving the one 500GB as the "array" (with no parity) and using the two SSDs as a cache pool, then
  12. OMG you're a star. Thanks so much!
  13. Apologies if this has already been asked, but is it possible to somehow make collapsible groups of related containers in the UI (or some addon that would get me this functionality)? I'd love to have groups such as:
      - monitoring
        - telegraf
        - influx
        - grafana
      - games
        - steamcache
        - minecraft
      That kind of thing, instead of one long list of 30-some containers. That way I can categorize and collapse the categories and make the whole thing neater. Is anything like this possible?
  14. Hi all - just a quick issue I'm having with my containers that I'm hoping to get some help with. I've got two Unraid servers, "fileserver" and "appserver", the former running 6.8.1 and the latter running 6.8.2 (through the NVIDIA plugin). My series, movie, and book folders are publicly exported via NFS on the fileserver and mounted on the appserver via the unassigned devices plugin. The issue I'm having is when it comes to docker containers - I'm running linuxserver.io containers for sonarr, lazylibrarian, plex, and radarr. About once a day, the Radarr and Lazylibra
  15. Fair enough, I'll just have to tell my OCD to shut up. Either that, or do the same thing with _another_ drive to get a new valid parity 1. Good to know about the no-downtime during the rebuild - I could've sworn I read somewhere that data drives could be rebuilt with the array active but parity drives couldn't. Happy to know I'm wrong about that one! I'll probably still go with the "two parity" method just to keep having a valid existing parity while the new one builds. Thanks for the ultra-fast response! (Off to start a new thread about some docker issues I'm havin
  16. Hi all, I'm getting ready to do some upgrades to my array (thanks Best Buy for putting easystores on sale!), and as part of this I need to upgrade my parity drive to accommodate the new drive size. I'd like to do this with as little downtime as possible, as both my wife and I use the array for our businesses. Is this procedure I found on reddit still the best one for that, seeing as I have open bays on my server?
      1) Stop the array
      2) Put the new drive into the server
      3) Assign the new drive as "parity 2"
      4) Start the array, allow parity to build on Parity
  17. Not sure if you ever got this done, but if anyone else finds this topic, here's how I did it:
      1) Stop the VM in ESXi
      2) Export the VM as an OVF template
      3) Make a folder on your unraid box called /mnt/user/domains/<NameOfVM>
      4) Copy the VMDK file from the export folder to the folder you created in step 3
      5) Run the following command: "qemu-img convert -p -f vmdk -O raw <vmdkfile> <vmdkfilename>.img". This will convert the file to the KVM/oVirt format.
      6) Create a new VM, change the BIOS to "SeaBIOS", and choose the .img file creat
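The conversion step above can be sketched as a small script. The VM name, export location, and disk filename here are hypothetical placeholders - substitute your own, and note this assumes the /mnt/user/domains/<NameOfVM> folder from step 3 already exists:

```shell
# Sketch of the VMDK-to-raw conversion from the steps above.
# All paths/names below are examples, not defaults.
VM_NAME="MyWindowsVM"
EXPORT_DIR="/mnt/user/isos/esxi-export"      # where the OVF export landed
DEST_DIR="/mnt/user/domains/${VM_NAME}"      # folder created in step 3

# -p shows progress, -f names the input format, -O the output format.
if command -v qemu-img >/dev/null 2>&1; then
  qemu-img convert -p -f vmdk -O raw \
    "${EXPORT_DIR}/${VM_NAME}-disk1.vmdk" \
    "${DEST_DIR}/${VM_NAME}.img"
fi
```

The resulting .img can then be attached to the new SeaBIOS VM as described in step 6.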
  18. Working really well! It's been humming along without a hitch pretty much since I last posted, and since then I've added a _second_ Unraid server that handles all of my docker/virtualization duties. Working on moving away from ESXi and over to that, and setting up a good workflow there. All in all, really happy with how everything is turning out!
  19. Had to buy everything, but after comparing the processor to my current one, it's an upgrade anyway. Snagged 3 of the LSI cards in a separate deal to replace the ones in the server, and I'm off to the races. Should all be in next week!
  20. Thought you'd like to know I just pulled the trigger on this over the weekend. Believe it or not, the PassMark score of the CPU in the eBay server is _more than double_ the one that's in my fileserver now (a Sempron 145!). I guess the benefit of "this thing is a fileserver and nothing else" is that it eases a lot of my CPU needs.
  21. Oh, sound is definitely an issue. I've seen a bunch of posts about modding that case to remove the housing for the server-grade PSUs and putting in a desktop one (which is actually what I'm doing with my current one as well). Going for as close to silent as I can get, considering my home office and my rack share a room.
  22. Yep - I was actually researching that after your last reply. Apparently all of the SuperMicro backplanes that end in "TQ" are just straight passthroughs, which would explain why there are no SAS or breakout connectors on the back of it - just 24 straight SATA ports. Which is fine, I've already got plenty of reverse breakout cables lying around.
  23. It's funny, I didn't even think to ask, but that's a great idea. I figure I can transplant my existing hardware for now (which works fine but was built on a cheap single-core, single-thread CPU I had lying around - parity checks take literally a day and a half!)
  24. Thanks for the tip! I'll head over there. I figure even if only the backplane is good and I can rip out the rest of the internals, I'm still coming out ahead. Thanks again!
  25. Looks like I can get 3 LSI SAS9201-8i cards for like $80 on eBay, so that at least solves the _controller_ problem. One step in the right direction!