Posts posted by jfeeser

  1. Hi all, having a bit of trouble with this container.  It looks like just about hourly the CrashPlan instance in the container tries to update itself and creates a temp file to do so, but then the update fails.  That's fine by me, since it's still working, but it leaves the temp update files behind, so after half a day I end up with gigs upon gigs of files like "c42.12976608722313913967.dl" in the /conf/tmp folder of the CrashPlan container.  Is there any way to prevent this from happening, other than setting up a cron job to delete the contents of the folder (rough sketch below)?  Thanks in advance for your help!
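
    For reference, the cron-style cleanup would look something like the line below.  This is only a sketch: the appdata path is an assumption, so point it at wherever the container's /conf/tmp actually lives on your system, and schedule it hourly via the User Scripts plugin or a crontab entry.

     # Delete leftover CrashPlan update temp files (c42.*.dl) older than an hour
     find /mnt/user/appdata/crashplan/conf/tmp -name 'c42.*.dl' -mmin +60 -delete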

  2. Hi all, this morning I woke up to this in my inbox:

     

    Event: USB flash drive failure
    Subject: Alert [FEEZFILESERV] - USB drive is not read-write
    Description: USB_DISK (sda)
    Importance: alert
     

    I logged into the server and the flash drive is showing "green", and the server appears to be running fine (I remember reading somewhere that the entire OS loads into RAM, so that makes sense).  I'm also not surprised the USB stick is failing, since it's ages old and wasn't exactly a high-quality one to begin with.

     

    So I ask:  What do I do at this point?  How do I go about replacing the USB drive?  I've attached diagnostic logs with more info for people smarter than me.  Thanks in advance, everyone!
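
    In the meantime, since the OS runs from RAM, a copy of the flash contents can still be pulled while the server is up, so the config isn't lost if the stick dies outright.  A rough sketch (the destination path is just an example):

     # Copy the boot flash (the Unraid config lives in /boot) to a folder on the array
     mkdir -p /mnt/user/backups/flash
     rsync -a /boot/ /mnt/user/backups/flash/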

    feezfileserv-diagnostics-20220327-0933.zip

  3. I just figured it out!  It turns out that, at least in my case, it was Valheim+ doing it.  I had it turned on on the server side when the clients didn't have it installed.  For some reason the last version didn't care about that, but the new one very much does.  I installed the mod and it worked without a hitch.  I'm assuming setting that variable to False on the server side would also fix it.

    • Like 1
  4. 1 hour ago, Beng8686 said:

    I just wanted to pop in a say THANK YOU ich777 for all of your hard work. I had some issues getting it to update to 0.146.11 but got my answer reading here. Everything is working GREAT!

    What ended up working for you to get the update?  I've tried turning firewalls off, disabling pihole, everything short of rebuilding the container, and so far nothing has worked.

  5. Understood.  So ideally I would just bond everything and then use VLANs to separate traffic.

    That being said, any idea why the bond is stuck in "active/backup" mode?

     

    (also love your nick, it's "nicely inconspicuous")

  6. Right, the intent of the way I have it set up is for eth0 (which is on the motherboard) to be the primary interface for management and docker functions, and for eth1-4 to be bonded for all other functions (primarily file access).  Would that be the correct way to accomplish this?

  7. Hi all, I'm experiencing some odd behavior on my Unraid server while trying to set up link aggregation.  The short version is that when I enable bonding for eth1-4 (4 interfaces on a 4-port add-on card), the only bonding mode I can choose is "active-backup".  If I choose anything else and hit apply (such as 802.3ad, which is what I actually want to use), it just flips back to active-backup.  I've got the 4 ports they are plugged into set up as "aggregate" on my UniFi switch, but the mode refuses to change.  Can't seem to figure it out.  Attached are screenshots of my configuration - can anyone take a look?  Thanks!
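
    For anyone checking the same thing, the mode the kernel actually negotiated can be read from the terminal.  Just a diagnostic sketch - bond0 is the usual default name on Unraid, so adjust if yours differs:

     # Show which bonding mode is actually active (per-NIC status is further down in the file)
     grep -i "bonding mode" /proc/net/bonding/bond0
     cat /proc/net/bonding/bond0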

     

     

    ethernet1.JPG

    ethernet2.JPG

    routing.JPG

  8. I'll double-check this, but I'm not certain it's a power issue.  It's a 1000W power supply (overkill, I know, but I had it laying around) going to a backplane with 5 power inputs; I have 3 on one line and 2 on another.  When the parity drive was initially having issues, I swapped it to a location that would've been powered by the line it wasn't initially on, and the issue persisted.

  9. Hi all, wanted to reach out about a recurring problem I've had with my server.  Occasionally I'll get parity disk failures where the disk shows *trillions* of reads and writes (see attached).  Previously I would stop the array, remove the parity drive, start the array, stop it again, re-add it, and the parity rebuild would work without an issue.  A couple months later, the same thing would happen.

     

    Fast forward to this week: it happened again, and I thought "okay, this drive is probably on its way out".  I hopped off to Best Buy, grabbed a new drive, popped it in, precleared it (which went through without issue), and added it to the array as the new parity.  During the parity rebuild, the exact same thing happened with the brand-new drive.

     

    Previously I've tried moving the drive to another bay in the chassis (it's a Supermicro 24-bay), but it doesn't seem to make a difference.

     

    Has anyone seen this before?  What are the next troubleshooting steps?  The attached screenshot is for the brand new drive.  I've also included a diagnostic packet.
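
    In case it helps whoever takes a look, SMART data for the drive can be pulled from the terminal as well.  Just a sketch - /dev/sdX is a placeholder for whatever device letter the parity drive gets assigned:

     # Quick SMART health verdict for the parity drive
     smartctl -H /dev/sdX
     # Full SMART attribute and error-log dump
     smartctl -a /dev/sdX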

    Capture.PNG

    feezfileserv-diagnostics-20201230-1037.zip

  10. Currently 18, but I'm actually looking to size that count down, as it's a mix of 10TB drives all the way down to 3TB.  (It's in a 4U, 24-bay chassis, so I got lazy and never "sized up" any drives; when I ran out of space I just added another one.)  I'm looking to eventually (in my copious free time) take the stuff on the 3s, move it to the free space on the 10s, and remove the 3s from the array - rough idea of the copy step below.
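
    Something per-disk along these lines, sketched under the assumption that disk1 is one of the 3TB drives being emptied onto disk5 (a 10TB with free space) - the actual disk numbers would obviously differ:

     # Copy everything from the 3TB disk onto a 10TB disk, preserving attributes,
     # then verify the contents before pulling the emptied drive from the array
     rsync -avX --progress /mnt/disk1/ /mnt/disk5/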

  11. Hi all, currently I'm running two separate servers, both Unraid: one for docker/VMs, and one for file serving only.  Specs below:

     

    Appserver:

    Motherboard: Supermicro x9DRi-LN4+

    CPU: 2x Xeon E5-2650 v2 (16 cores / 32 threads total @ 2.60 GHz)

    RAM: 64 GB DDR3

    Running about 20 docker containers (plex, *arr stack, monitoring, pihole, calibre, homeassistant, etc.) and 3 VMs

     

    Fileserver:

    Motherboard: Gigabyte Z97X-Gaming7

    CPU: Core i5-4690K (4 cores @ 3.50 GHz)

    RAM: 16GB DDR3

    Running minimal dockers for backup/syncing, etc.

     

    Hard drive space is kind of irrelevant, as I've got plenty of it.  The original two-server design came from me not wanting to put all of my eggs in one basket, and having the hardware to do so.  Now, however, I'm wondering if it would be easier/more efficient to take the motherboard/processor/RAM from the app server, move it into the file server, and migrate everything that was running on the app server over to the file server as well, and just have everything in one box.

     

    If this were your stuff, what would you guys do?

  12. Hi all, I've been trying to use this docker in my existing setup with the rest of my content stack, but I'm running into some issues.  Is it possible to have the docker running on my application server with the library on a separate Unraid box that serves as my fileserver?  If I use Unassigned Devices to map the share as NFS, after a while I get "stale file handle" errors when accessing the books.  If I map it as SMB (with the docker looking at the share as "Slave R/W"), I get errors that the database is locked.

     

    If I run everything local to the application server and keep the library in the docker, everything works fine, but this isn't ideal for me, as the app server is pretty lightweight and I'd rather keep the files on the file server where they belong.  Any thoughts, all?
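
    For clarity, the remote-library setup I'm describing boils down to something like the mapping below.  This is only a sketch of what the Unraid template generates: the mount path, container path, and image name are assumptions (calibre-web is just a stand-in for this container), and "Slave R/W" in the template corresponds to the rw,slave flags.

     # Unassigned Devices NFS mount on the app server, passed through to the container
     docker run -d --name calibre-web \
       -v /mnt/disks/fileserver_books:/books:rw,slave \
       lscr.io/linuxserver/calibre-web:latest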

  13. Interesting.  What would go on the unassigned device?  The VM files, or the dockers?  Logistically, does it make a difference, so long as they're separate?  Alternatively, should I divvy up the "downloader" dockers on the one SSD and keep all of the "static" dockers on the other?  Just trying to figure out the best path forward - thanks again for the advice :)

  14. Hi all, I'm building my second Unraid server, separate from my existing fileserver, that'll basically only exist to run applications, both docker and VM (two Windows VMs, and the usual "plex stack" on the docker side).  It's currently running with just a single internal 500GB spinning disk, and I'm weighing options to speed it up, as I appear particularly I/O bound.

     

    I'm planning on buying two 500GB SSDs, and I'm looking for the best way to configure the three drives.  I was thinking of leaving the one 500GB spinner as the "array" (with no parity) and using the two SSDs as a cache pool, then setting the VM and docker shares to be "cache only".  Is this the best way to go, or is there an easy way at this point to run an "all-flash" array?  (Some googling led me to threads saying this isn't really possible at the moment.)

     

    Thanks in advance for any advice you all can give!

  15. Apologies if this has already been asked, but is it possible to somehow make collapsible groups of related containers in the UI (or is there some addon that would get me this functionality)?  I'd love to have groups such as:

     

    -monitoring

     ---telegraf

     ---influx

     ---grafana

    -games

     ---steamcache

     ---minecraft

     

    that kind of thing, instead of one long list of 30-some containers.  That way I can categorize and collapse the categories, and make the whole thing neater.  Is anything like this possible?

    • Like 1
  16. Hi all - just a quick issue I'm having with my containers that I'm hoping to get some help with.

     

    I've got two Unraid servers, "fileserver" and "appserver", the former running 6.8.1 and the latter running 6.8.2 (through the NVIDIA plugin).  My series, movie, and book folders are publicly exported via NFS on the fileserver and mounted on the appserver via the Unassigned Devices plugin.

     

    The issue I'm having is when it comes to docker containers - I'm running linuxserver.io containers for Sonarr, LazyLibrarian, Plex, and Radarr.  About once a day, the Radarr and LazyLibrarian containers lose their connection to the fileserver, citing a "Stale File Handle".  However, the Sonarr and Plex containers never lose their connection.  If I restart the containers with the error, all is fine again.

     

    Any idea where to start diagnosing this issue?  Thanks!
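
    For reference, a starting point from the appserver terminal might be something like the checks below.  Just a diagnostic sketch - the mount path is whatever Unassigned Devices actually created on your system:

     # See what NFS version/options the Unassigned Devices mounts are using
     mount | grep -i nfs
     # Check whether the mount itself has gone stale (this errors out if it has)
     ls /mnt/disks/fileserver_series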

  17. Fair enough, I'll just have to tell my OCD to shut up.  Either that, or do the same thing with _another_ drive to get a new valid parity 1.

     

    Good to know about the no-downtime during the rebuild - I could've sworn I read somewhere that data drives could be rebuilt with the array active but parity drives couldn't.  Happy to know I'm wrong about that one!  I'll probably still go with the "two parity" method just to keep a valid existing parity while the new one builds.  Thanks for the ultra-fast response!

     

    (off to start a new thread about some docker issues I'm having...see you there!) :)

  18. Hi all, I'm getting ready to do some upgrades to my array (thanks, Best Buy, for putting EasyStores on sale!), and as part of this I need to upgrade my parity drive to accommodate the new drive size.  I'd like to do this with as little downtime as possible, as both my wife and I use the array for our businesses.  Is this procedure I found on reddit still the best one for that, seeing as I have open bays on my server?

     

    1) Stop the array

    2) Put the new drive into the server

    3) Assign the new drive as "parity 2"

    4) Start the array, allow parity to build on Parity 2

    5) Once parity build is complete, stop the array

    6) Remove old parity drive

    7) Unassign Parity 1 in the array.

    8) Start the array

     

    Will this procedure still work?  I'd rather do a "hot" build of another parity than have my array down for an entire day while building a new one (not to mention it's a ~50TB array, so flying without a net for that long makes me nervous!)

     

    Also, more of an OCD question, but once the new parity is built and the old parity is removed, can I reassign parity 2 to be parity 1?  Having a parity 2 and no parity 1 would probably cause my OCD to flare up :)

     

    Thanks in advance!

     

  19. Not sure if you ever got this done, but if anyone else finds this topic, here's how I did it:

     

    1)  Stop the VM in ESXi

    2)  Export the VM as an OVF template

    3)  Make a folder on your unraid box called /mnt/user/domains/<NameOfVM>

    4)  Copy the VMDK file from the export folder to the folder you created in step 3

    5)  Run the following command:  "qemu-img convert -p -f vmdk -O raw <vmdkfile> <vmdkfilename>.img".  This converts the VMDK into a raw disk image that KVM can use (see the worked example after this list).

    6)  Create a new VM, change the BIOS to "SeaBIOS", and choose the .img file created in step #5 for the first hard drive.
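
    As a worked example of step 5, with hypothetical file names (yours will match whatever your OVF export produced):

     # Convert the exported VMDK into a raw image inside the VM's domain folder.
     # -p shows progress; -f and -O are the source and destination formats.
     cd /mnt/user/domains/MyWebServer
     qemu-img convert -p -f vmdk -O raw MyWebServer-disk1.vmdk MyWebServer.img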

     

    At this point, if it's a Linux machine, you can boot it and it pretty much Just Works (tm).  If it's a Windows box, you've got a couple more steps.

     

    7)  Boot the Windows box and let it freak out that there is a bunch of new hardware and attempt to install drivers for it.  Let it do its thing - it'll probably reboot a couple times.

    8)  Go to Add/Remove Programs and uninstall VMware Tools.

    9)  As part of the VM creation process, you'll end up with a D: (or first available letter) drive containing the guest tools / virtio driver files (basically the VMware Tools equivalent for the KVM/oVirt world).  Open that up, go to the guest tools install folder, and install it.  Reboot.

    10) After the reboot, go to Device Manager and install drivers for anything that wasn't detected properly.  All the drivers you need should also be on that same D: drive.

    11)  Reboot one last time and you should be good to go!

     

    That's pretty much it.  The only other snag I noticed is that a couple of the VMs I converted that had static IPs flipped back over to DHCP (my assumption is it's because of the change in virtual network hardware), so make sure to check that.  Let me know if you (or anyone else) run into any issues!

    • Like 3
    • Thanks 2
  20. Working really well!  It's been humming along without a hitch pretty much since I last posted, and since then I've added a _second_ Unraid server that handles all of my docker/virtualization duties.  Working on moving away from ESXi and over to that, and setting up a good workflow there.  All in all, really happy with how everything is turning out!

  21. Had to buy everything, but after comparing the processor to my current one, it's an upgrade anyway.  Snagged 3 of the LSI cards in a separate deal to replace the ones in the server, and I'm off to the races.  Should all be in next week!  :)