jfeeser

Members
  • Posts: 74
  • Joined

  • Last visited

1 Follower

  • Gender: Undisclosed

jfeeser's Achievements

Rookie (2/14)

Reputation: 7

  1. Hi Djoss, I'm running the latest version of the container and I'm still running into this issue. Any idea what could be going on?
  2. Hi all, having a bit of trouble with this container... it looks like just about hourly the CrashPlan instance in the container tries to update itself and creates a temp file to do so, but then the update fails. That's fine by me, since it's still working, but it leaves the temp update files behind, so after half a day I end up with gigs upon gigs of files like "c42.12976608722313913967.dl" in the /conf/tmp folder of the CrashPlan container. Is there any way to prevent this from happening, other than setting up a cron job to delete the contents of the folder? (A cleanup sketch along those lines is included after this list.) Thanks in advance for your help!
  3. Hi all, this morning I woke up to this in my inbox:
     Event: USB flash drive failure
     Subject: Alert [FEEZFILESERV] - USB drive is not read-write
     Description: USB_DISK (sda)
     Importance: alert
     I logged into the server and the flash drive is showing "green", and the server appears to be running fine (I remember reading somewhere that the entire OS loads into RAM, so that makes sense). I'm also not surprised the USB stick is failing, since it's ages old and wasn't exactly a high-quality one to begin with. So I ask: what do I do at this point? How do I go about replacing the USB drive? (A sketch for backing up the flash contents first is included after this list.) I've attached diagnostic logs for more info for people smarter than me. Thanks in advance, everyone! feezfileserv-diagnostics-20220327-0933.zip
  4. I just figured it out! It turns out that, at least in my case, it was Valheim+ doing it. I had it turned on on the server side when the clients didn't have it installed. For some reason the last version didn't care about that, but the new one very much does. I installed the mod and it worked without a hitch. I'm assuming setting that variable to False on the server side would also fix it.
  5. What ended up working for you for getting the update? I've tried turning firewalls off, disabling Pi-hole, everything short of rebuilding the container, and so far nothing has worked.
  6. Understood. So ideally I would just bond everything and then use VLANs to separate traffic. That being said, any idea why the bond is stuck in "active-backup" mode? (Also, love your nick, it's "nicely inconspicuous".)
  7. Right, the intent of the way I have it set up is for eth0 (which is on the motherboard) to be the primary interface for management and Docker functions, and for eth1-4 to be bonded for all other functions (file access, primarily). Would that be the correct way to accomplish this?
  8. Hi all, I'm experiencing some odd behavior on my Unraid server while trying to set up link aggregation. The short version is that when I enable bonding for eth1-4 (4 interfaces on a 4-port add-on card), the only bonding mode I can choose is "active-backup". If I choose anything else and hit apply (such as 802.3ad, which is what I actually want to use), it just flips back to active-backup. I've got the 4 ports they are plugged into set up as "aggregate" on my UniFi switch, but the mode refuses to change. Can't seem to figure it out. (A quick way to confirm which mode the kernel is actually running is sketched after this list.) Attached are screenshots of my configurations, can anyone take a look? Thanks!
  9. Good point. I just double-checked and it's actually 6 connections back to the PSU, with 4 plugged into one line and 3 into the other - maybe switching it to 3 and 3 will help.
  10. I'll double-check this, but I'm not certain it's a power issue. It's a 1000W power supply (overkill, I know, but I had it lying around) going to a backplane with 5 power inputs; I have 3 on one line and 2 on another. When the parity was initially having issues, I swapped it to a location that would've been powered by the line it wasn't initially on, and the issue persisted.
  11. Hi all, wanted to reach out about a recurring problem I've had with my server. Occasionally I'll get parity disk failures that show the disk having *trillions* of reads and writes (see attached). Previously I would stop the array, remove the parity drive, start the array, stop it again, re-add it, and the parity rebuild would work without an issue. A couple of months later, the same thing would happen. Fast forward to this week: it happened again, and I thought "okay, this drive is probably on its way out". I hopped off to Best Buy, grabbed a new drive, popped it in, precleared it (which went through without issue), and added it to the array as the new parity. During the parity rebuild, the exact same thing happened with the brand-new drive. Previously I've tried moving the drive to another bay in the chassis (it's a Supermicro 24-bay), but it doesn't seem to make a difference. Has anyone seen this before? What are the next troubleshooting steps? The attached screenshot is for the brand-new drive. I've also included a diagnostic packet. feezfileserv-diagnostics-20201230-1037.zip
  12. Currently 18, but I'm actually looking to size that count down, as it's a mix of 10TB drives all the way down to 3TB. (It's in a 4U, 24-bay chassis, so I got lazy and never "sized up" any drives; when I ran out of space I just added another one.) I'm looking to eventually (in my copious free time) take the stuff on the 3s, move it to the free space on the 10s, and remove the 3s from the array. (A quick free-space sanity check for that move is sketched after this list.)
  13. Hi all, currently I'm running two separate servers, both Unraid: one for Docker/VMs, and one for fileserving only. Specs below:
      Appserver: Motherboard: Supermicro X9DRi-LN4+; CPU: 2x Xeon E5-2650 v2 (16 cores / 32 threads total at 2.60 GHz); RAM: 64 GB DDR3. Running about 20 Docker containers (Plex, *arr stack, monitoring, Pi-hole, Calibre, Home Assistant, etc.) and 3 VMs.
      Fileserver: Motherboard: Gigabyte Z97X-Gaming 7; CPU: Core i5-4690K (4 cores @ 3.50 GHz); RAM: 16 GB DDR3. Running minimal Dockers for backup/syncing, etc.
      Hard drive space is kind of irrelevant, as I've got plenty of it. The original two-server design came from me not wanting to put all of my eggs in one basket, and from having the hardware to do so. Now, however, I'm wondering if it would be easier/more efficient to take the motherboard/processor/RAM from the app server, move it into the file server, migrate everything that was running on the app server over, and just have everything in one box. If this were your stuff, what would you guys do?
  14. Hi all, I've been trying to use this docker in my existing setup with the rest of my content stack, but I'm running into some issues. Is it possible to have the docker running on my application server with the library on a separate Unraid box that serves as my fileserver? If I use Unassigned Devices to map the share as NFS, after a while I get "stale file handle" errors when accessing the books. If I map it as SMB (with the docker looking at the share as "Slave R/W"), I get errors that the database is locked. If I run everything local to the application server and keep the library in the docker, everything works fine, but this isn't ideal for me, as the app server is pretty lightweight and I'd rather keep the files on the file server where they belong. (A small probe script for pinning down when the stale-handle errors start is sketched after this list.) Any thoughts, all?
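
Regarding the leftover CrashPlan update files in the /conf/tmp post above: short of a container setting that stops the self-update attempts, a scheduled cleanup is the usual workaround. Below is a minimal sketch of such a job in Python, meant to run from cron or the User Scripts plugin; the host-side path and the 24-hour age threshold are assumptions, so adjust them to wherever the container's /conf/tmp folder is actually mapped.

```python
#!/usr/bin/env python3
"""Prune leftover CrashPlan update temp files (c42.*.dl) from the container's tmp folder."""
import time
from pathlib import Path

TMP_DIR = Path("/mnt/user/appdata/crashplan/tmp")  # hypothetical host-side mapping of /conf/tmp
MAX_AGE_HOURS = 24                                 # leave very recent files alone

cutoff = time.time() - MAX_AGE_HOURS * 3600
for f in TMP_DIR.glob("c42.*.dl"):
    if f.is_file() and f.stat().st_mtime < cutoff:
        print(f"removing {f} ({f.stat().st_size / 1e9:.1f} GB)")
        f.unlink()
```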
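
On the failing USB flash drive post: whatever the replacement procedure ends up being, the first step is grabbing a copy of the current flash contents, since /boot holds the config folder with the array assignments and license key. A minimal sketch, assuming the old stick is still readable at the standard /boot mount and that a /mnt/user/backups share exists to receive the copy (that destination is an assumption):

```python
#!/usr/bin/env python3
"""Copy the Unraid flash drive contents (/boot) to a dated folder on the array."""
import shutil
from datetime import date
from pathlib import Path

SRC = Path("/boot")                                                # standard flash mount point
DST = Path("/mnt/user/backups") / f"flash-{date.today():%Y%m%d}"   # assumed backup share

shutil.copytree(SRC, DST)   # fails if DST already exists, which is the safe behavior here
print(f"Flash backed up to {DST}")
```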
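
On the bonding mode that keeps snapping back to active-backup: independent of what the web UI shows, the Linux kernel reports the mode a bond is actually running in /proc/net/bonding/&lt;name&gt;. A small sketch that prints the relevant lines, assuming the bond in question is bond0 (check /proc/net/bonding/ for the name Unraid actually created):

```python
#!/usr/bin/env python3
"""Show the mode and per-slave status of a Linux bond via /proc/net/bonding."""
from pathlib import Path

BOND = "bond0"  # assumed bond name; list /proc/net/bonding/ to see what exists

for line in Path(f"/proc/net/bonding/{BOND}").read_text().splitlines():
    # "Bonding Mode:" is the mode in effect, e.g. "IEEE 802.3ad Dynamic link aggregation"
    # or "fault-tolerance (active-backup)"; the slave lines show each NIC's link state.
    if line.startswith(("Bonding Mode:", "Slave Interface:", "MII Status:")):
        print(line)
```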
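
On shrinking the array by emptying the 3TB drives onto the 10TB ones: before moving anything, it's worth confirming that the data on the disks being retired actually fits in the free space on the disks staying. A quick sketch using per-disk mount points; the disk numbers below are placeholders, not the real layout:

```python
#!/usr/bin/env python3
"""Compare used space on disks being retired against free space on the disks staying."""
import shutil

RETIRING = ["/mnt/disk7", "/mnt/disk8"]   # placeholder: the 3TB disks to empty
STAYING  = ["/mnt/disk1", "/mnt/disk2"]   # placeholder: the 10TB disks with free space

to_move = sum(shutil.disk_usage(d).used for d in RETIRING)
free    = sum(shutil.disk_usage(d).free for d in STAYING)

print(f"data to move:       {to_move / 1e12:.2f} TB")
print(f"free on kept disks: {free / 1e12:.2f} TB")
print("fits" if free > to_move else "not enough free space yet")
```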
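
On the Calibre library over NFS giving "stale file handle" errors: one way to narrow down when the remote mount goes bad (whether it lines up with the mover running, a reconnect, or a container restart) is a tiny probe that stats the library path on an interval and logs the first failure. The mount path below is an assumption; point it at the Unassigned Devices mount on the app server.

```python
#!/usr/bin/env python3
"""Stat a remote library path on an interval and log stale-handle errors with timestamps."""
import errno
import os
import time

LIBRARY = "/mnt/remotes/feezfileserv_books"  # assumed Unassigned Devices mount point
INTERVAL = 60                                # seconds between checks

while True:
    try:
        os.stat(LIBRARY)
        status = "ok"
    except OSError as e:
        status = "STALE FILE HANDLE" if e.errno == errno.ESTALE else f"error: {e}"
    print(time.strftime("%Y-%m-%d %H:%M:%S"), status, flush=True)
    time.sleep(INTERVAL)
```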