
sminker

Members
  • Posts: 27
  • Joined
  • Last visited

Posts posted by sminker

  1. 6.8.3 to 6.9 went fine on this end. Just shut down dockers and disabled autostart, backed up flash, and updated. Five minutes later I was up and running. The original Intel iGPU passthrough to VMs seems fine; Plex can use it, no issue. Any reason to change to the new method for Intel iGPU passthrough, besides cleaning up the go file? (My understanding of the new method is sketched at the end of this post.)

     

     Edit: Minor panic attack after I posted this, lol. The Zigbee2mqtt docker wasn't starting: couldn't find the device, plus multiple other errors in the logs. I just unplugged the USB stick (CC2531 dongle), waited about 30 seconds, and plugged it back in. The docker started like a champ and Home Assistant started receiving updates. Just sayin', that would have been miserable if I'd lost that docker. The rest of my night would have been shot re-adding a ton of devices and resetting entity ID tags lol.
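
     For anyone curious about the new method: as I understand the 6.9 release notes, you drop the modprobe line from the go file and instead create an i915 conf file on the flash, which Unraid picks up at boot. A minimal sketch, worth double-checking against the release notes before copying:

         # Old method: delete this line from /boot/config/go
         #   modprobe i915

         # New 6.9 method: create this file (an empty one is enough) and
         # Unraid will load the i915 driver itself at boot
         touch /boot/config/modprobe.d/i915.conf

     The main payoff seems to be exactly the go-file cleanup, plus the driver loading surviving future OS upgrades without manual tweaks.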

  2. Happy to report this is going very smoothly.

     

     Moved all the data off the 4TB drives, turned auto-start off for all non-essential dockers and VMs, shut down Unraid, removed the old drives, and added the new drive. Turned the server back on and clicked New Config, assigned the drives to the appropriate slots, and formatted the old parity drive. The parity rebuild is currently underway on the new drive. Shares, Home Assistant, and the essential dockers are running with no issues.

     

     I did leave one of the 4TB drives in. The Blue Iris i9-9900K system only has room for two HDDs (dang gaming cases). 8TB should be enough for six cameras.

  3. 1 minute ago, jonathanm said:

    LOL!

    Not what I was hoping to see, but if you are comfortable with that, it's your data!

     Just kidding on that one. Between all the crap I've messed with on Unraid and having a pretty hefty/robust Home Assistant setup (lots of YAML and code work), I feel pretty confident I can handle just about anything that pops up. This is just the first time doing this many drives at once, and the user guide wasn't specific about what you can do in one go. I had the feeling it was fine, just needed confirmation. The user guide has a lot of good info for issues, and these forums can answer most questions with a bit of searching. And I won't forget the logs, just in case. Seen too many threads on here about people forgetting, or not knowing, to grab those.

  4. Just now, jonathanm said:

    Sounds reasonable. On first glance I thought you were expecting the data from the removed drives to magically appear on the other drives, but on a second reading I see you know to copy the data to the drives you are keeping in the array.

     

    I'm assuming you are confident in the health of all drives, and your backup strategy (not listed here) is tested to your satisfaction.

     

    If so, yes, your method will definitely save a bunch of time over doing each operation individually.

     Thanks. Yeah, I'm pretty confident with parity being down while it rebuilds. The drives are solid: I have never had a sudden shutdown (UPS backup since day one), there are no SMART errors on any drive, all were pre-cleared before use, and my once-every-three-months parity check has never shown an error.

     

     I plan on turning off all VMs except Home Assistant, which is on an unassigned SSD, and only keeping the dockers running that pertain to Home Assistant. The array will be seeing very little use while it's rebuilding.

     

     Backup strategy: slightly panic, download logs and post here, use Google, and DON'T shut down Unraid or touch anything until I get confirmation of the next step.

  5. Been using Unraid for a while, pretty dang familiar with it. I've replaced parity before and added an old disk to the array, but this is the first time for this procedure. I bought a new 14TB Easystore (currently pre-clearing). Here's what I want to do.

     

     Current array:

    1x 10TB Parity

    2x 10TB Array

    3x 4TB Array

     

     Replace the 10TB parity with the new 14TB drive.

     Remove the (3) 4TB drives.

     Put the previous 10TB parity drive into the array.

     

     Can I do all of this at once, since parity has to rebuild anyway?

     

     I would stop the array, shut down the system, and remove and add drives as outlined, then start Unraid and set it up as a new config. Beforehand I would of course use unBALANCE to move all the data onto the drives that are staying in the array, and make sure to set the shares appropriately with the proper "Include" settings. (A sketch of what that move amounts to is at the end of this post.)

     

     I was planning on just putting the 14TB in my Win10 machine for Blue Iris but thought, what the heck, I should just throw it into the server and use the (3) 4TB drives in a Windows data pool for Blue Iris.
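
     The sketch mentioned above: as far as I can tell, unBALANCE is essentially running rsync between disk shares under the hood, so the manual equivalent of emptying one of the 4TB drives looks roughly like this (the disk numbers here are made up for the example):

         # Copy everything from disk5 (a 4TB being removed) to disk2 (a 10TB staying),
         # preserving attributes and hardlinks, with per-file progress
         rsync -avXH --progress /mnt/disk5/ /mnt/disk2/

         # Only after verifying the copy, clear the source before pulling the drive:
         # rm -r /mnt/disk5/*

     Either way, double-check the share "Include" lists afterward so nothing new lands on drives that are about to leave.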

     

  6. Log issues with SABnzbd? I've looked everywhere and can't find the spot to change the logging size, and Google hasn't been very helpful. I'm sure it's staring me right in the face.

     

     Edit: I added the extra parameter for setting the maximum log size, and it fixed it. But just for more knowledge, what setting could I have changed in the actual docker GUI or at a command prompt to set up the app properly? (My best guess at the config-file route is below the screenshot.)

     

    Screen Shot 2019-02-08 at 3.56.16 PM.png
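
     Best guess at answering my own question: SABnzbd keeps its settings in sabnzbd.ini (in the container's appdata folder), and if I'm reading things right there's a [logging] section that controls rotation. With the container stopped, something like this (the values are my assumption, adjust to taste):

         [logging]
         # Rotate the log once it hits ~5 MB, keeping 5 old copies
         max_log_size = 5242880
         log_backups = 5

     The extra docker parameter presumably just ends up setting the same thing.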

  7. HDDTemp docker question.

     

     I need to get the temps from nvme0 (the M.2 cache).

     

     I get all the sd* drives just fine in Grafana. Assuming I need to add another parameter to the docker.

     

    Can I just put a comma? 

     

    EX:  -q -d -F /dev/sd*, -q -d -F /dev/nvme*

     

     Or should I make another HDDTEMP_ARGS variable and duplicate the command with /dev/nvme* at the end instead of /dev/sd*? (My guess at the right syntax is at the end of this post.)

    Screen Shot 2019-01-27 at 3.00.16 PM.png
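
     Edit with my best guess, in case it helps someone searching later: hddtemp takes multiple devices as plain space-separated arguments, so no comma and no second variable, just one HDDTEMP_ARGS (assuming the globs expand inside the container):

         # One variable, devices space-separated; no comma, no repeated flags
         -q -d -F /dev/sd* /dev/nvme*

     The caveat, and this is me guessing, is that hddtemp was written around ATA/SCSI SMART and may not read NVMe temps at all. If it can't, smartctl -A /dev/nvme0 or nvme smart-log /dev/nvme0 (from the nvme-cli package) are the usual fallbacks.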
