sminker

Members
  • Posts: 27

  1. 6.8.3 to 6.9 went fine on this end. I just shut down the dockers and disabled autostart, backed up the flash, and updated. Five minutes later I was up and running. The original Intel iGPU passthrough to VMs seems fine; Plex can use it no issue. Any reason to change to the new method for Intel iGPU passthrough, besides cleaning up the go file?
     Edit: Minor panic attack after I posted this, lol. The zigbee2mqtt docker wasn't starting: it couldn't find the device, plus there were multiple other errors in the logs. I just unplugged the USB stick (CC2531 dongle), waited about 30 seconds, and plugged it back in. The docker started like a champ and Home Assistant started receiving updates. Just sayin', that would have been miserable if I'd lost that docker. The rest of my night would have been shot re-adding a ton of devices and resetting entity ID tags, lol.
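     A more re-plug-proof way to map that dongle (a minimal sketch, assuming the CC2531 shows up under /dev/serial/by-id; the exact ID string below is a hypothetical placeholder, so check the listing on your own box first):
        # List the stable symlinks for USB serial adapters; the by-id name
        # encodes vendor/product/serial, so it survives re-enumeration.
        ls -l /dev/serial/by-id/
        # Hypothetical mapping of that stable path into the zigbee2mqtt
        # container (Unraid "Extra Parameters" style); Docker follows the
        # symlink on the host side.
        --device=/dev/serial/by-id/usb-Texas_Instruments_CC2531_XXXXXXXX-if00:/dev/ttyACM0
     That way the container keeps working even if the kernel hands the stick a different ttyACM/ttyUSB number after an unplug.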
  2. Happy to report this is going very smoothly. I moved all the data off the 4TB drives, turned auto-start off for all non-essential dockers and VMs, shut down Unraid, removed the drives, and added the new drive. Turned the server back on, clicked New Config, assigned the drives to the appropriate slots, and formatted the old parity drive. The parity rebuild is currently underway on the new drive. Shares, Home Assistant, and the essential dockers are running with no issues. I did leave one of the 4TB drives in; the Blue Iris i9-9900k system only has room for two HDDs (dang gaming cases). 8TB should be enough for 6 cameras.
  3. Just kidding on that one. Between all the crap I've messed with on Unraid and having a pretty hefty/robust Home Assistant setup (lots of YAML and code work), I feel pretty confident I can handle just about anything that pops up. This is just the first time doing this many drives at once, and the user guide wasn't specific about what you can do in one shot. I had the feeling; I just needed confirmation. The user guide has a lot of good info for issues, and these forums can answer most questions with some searching. And I won't forget the logs, just in case. I've seen too many threads on here about people forgetting, or not knowing, to grab those.
  4. Thanks. Yeah, I'm pretty confident with parity being down while it rebuilds. The drives are solid: I have never had a sudden shutdown (UPS backup since day one), there are no SMART errors on any drive, all were pre-cleared before use, and my once-every-3-months parity check has never shown an error. I plan on turning off all VMs except Home Assistant, which is on an unassigned SSD, and only keeping the dockers running that pertain to Home Assistant. The array will see very little use while it's rebuilding. Backup strategy: slightly panic, download the logs and post here, use Google, and DON'T shut down Unraid or touch anything until I get confirmation of the next step.
  5. Been using Unraid for a while and I'm pretty dang familiar with it. I've replaced parity before and added an old disk to the array, but this is the first time for this procedure. I bought a new 14TB Easystore (currently pre-clearing). Here's what I want to do.
     Current array:
       • 1x 10TB parity
       • 2x 10TB array
       • 3x 4TB array
     The plan:
       • Replace the 10TB parity with the new 14TB drive.
       • Remove the (3) 4TB drives.
       • Put the previous 10TB parity into the array.
     Can I do all of this at once, since parity has to rebuild anyway? I would stop the array, shut down the system, remove and add the drives as outlined, then start Unraid and set it up as a new config. I would of course use unBALANCE first to move all the data onto the drives that are staying in the array (a manual equivalent is sketched below) and make sure to set the shares appropriately with the proper "Include" settings. I was planning on just putting the 14TB in my Win10 machine for Blue Iris, but thought, what the heck, I should throw it into the server and use the (3) 4TB drives in a Windows data pool for Blue Iris.
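     As I understand it, unBALANCE is driving rsync between the per-disk mounts under the hood, so a minimal manual sketch of emptying one of the 4TB drives might look like this (the disk numbers are assumptions; dry-run first and verify before deleting anything):
        # Preview what would move from disk4 (a leaving 4TB drive) to
        # disk1 (a staying 10TB drive); -R with the /./ anchor keeps the
        # share-relative paths intact under /mnt/disk1.
        rsync -avPRn /mnt/disk4/./ /mnt/disk1/
        # Run it for real once the preview looks right:
        rsync -avPR /mnt/disk4/./ /mnt/disk1/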
  6. Another one over here. New CX500 500GB drive, about three weeks old, and I've been getting the error since installing it. I use it for my Plex appdata (got tired of the mover taking forever when upgrading cache drives). This is going to make me avoid Crucial SSDs for a while. I've never had an error with any of my Samsungs, SPs, or SanDisks, and some are 3-4 years old.
  7. Is your downloader (torrent) still working on a file? The mover won't move it until it's done.
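     A quick way to check for that (a minimal sketch, assuming the downloads land under /mnt/cache/downloads; adjust the path to your share):
        # List processes holding files open under the downloads share;
        # anything shown here is what the mover will skip.
        lsof +D /mnt/cache/downloads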
  8. Set every share except system, appdata, and domains to YES. Make sure appdata, domains, and system are set to PREFER. Then run the mover.
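     If you don't want to wait for the schedule, the mover can be kicked off from the terminal too (a minimal sketch; the script path is what stock Unraid uses, but verify it exists on your version):
        # Invoke the mover manually.
        /usr/local/sbin/mover
        # Watch its progress in the syslog (requires mover logging to be
        # enabled under Settings):
        tail -f /var/log/syslog | grep -i mover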
  9. You can do this, creating a btrfs cache pool without losing data. Move everything off the cache drive, reformat, create the pool, then put everything back. It's rather easy.
  10. Check it out. This is what you need to do.
  11. Log issues with SABnzbd? I've looked everywhere and can't find the spot to change the logging size, and Google hasn't been very helpful. I'm sure it's staring me right in the face.
     Edit: I added the extra parameter for setting the maximum log size, and that fixed it. But just for more knowledge, what setting could I have changed in the actual docker GUI or command prompt to set the app up properly?
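     Reading "extra parameter" as Docker's own json-file log rather than SABnzbd's internal log (an assumption; the post doesn't say which log grew), the per-container flags and the daemon-wide equivalent look roughly like this:
        # Per-container (Unraid "Extra Parameters" field): cap the Docker
        # log at 50 MB with a single rotated file.
        --log-opt max-size=50m --log-opt max-file=1
        # Daemon-wide alternative in /etc/docker/daemon.json; applies to
        # containers created after the daemon restarts.
        { "log-driver": "json-file",
          "log-opts": { "max-size": "50m", "max-file": "1" } }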
  12. You can browse the cache to see what's on there. Do a manual Move operation, and after it's done, click the little folder icon on the far right of the cache drive on the "Main" tab. There should really only be appdata, domains, and system on there. If there's more, go to that share, set its cache setting to "Yes", and invoke the mover again; it should move it off the cache drive. Once complete, change the share back to "No".
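     From the terminal, the same check is one command (a minimal sketch; /mnt/cache is the standard cache mount point):
        # Show which top-level shares are taking up space on the cache.
        du -h --max-depth=1 /mnt/cache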
  13. Make sure that share's settings are set to use ALL disks. The default when adding a share is ALL, but if you changed it while messing around in the beginning, there's a chance it's set to use Disk 1 only. Doesn't hurt to double-check. It won't start filling the 2TB drive unless it's told to use that disk.
  14. Thanks for that. It's still not giving readings for the NVMe drive. I've tried /nvme*, /nvme0*, and its direct location in the dev folder, /dev/nvme0n1. Does anyone have any other options I can try?
  15. HDDTemp docker question: I need to get the temps from nvme0 (the M.2 cache). I get all the sd** just fine in Grafana. Assuming I need to add another parameter to the docker, can I just put a comma? E.g.: -q -d -F /dev/sd*, -q -d -F /dev/nvme* Or should I make another HDDTEMP_ARGS variable and duplicate the command with /nvme* at the end instead of /sd*?
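     For what it's worth, hddtemp reads ATA SMART and, as far as I know, has no NVMe support at all, which would explain the follow-up above where no argument variation helped. A hedged workaround sketch using smartmontools instead (assuming smartctl is available where /dev/nvme0 is reachable):
        # hddtemp accepts several devices in one argument list, so no
        # comma is needed for the SATA disks:
        hddtemp -q /dev/sd?
        # For the NVMe drive, query SMART directly and pull the
        # temperature line:
        smartctl -A /dev/nvme0 | grep -i temperature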