Everything posted by sminker

  1. 6.8.3 to 6.9 went fine on this end. Just shut down dockers and disabled autostart, backed up flash, and updated. 5 minutes later I was up and running. Original Intel iGPU passthrough to VMs seems fine; Plex can use it no issue. Any reason to change to the new method for Intel iGPU passthrough, besides cleaning up the go file? Edit: Minor panic attack after I posted this lol. Zigbee2mqtt docker wasn't starting. Couldn't find device, plus multiple other errors in the logs. I just unplugged the USB stick (CC2531 dongle), waited about 30 seconds, and plugged it back in. Docker started like a champ and Home Assistant started receiving updates. Just sayin, that would have been miserable if I lost that docker. The rest of my night would have been shot with re-adding a ton of devices and resetting entity ID tags lol.
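     For reference, here is roughly what that go-file cleanup looks like, assuming the common pre-6.9 setup (your exact lines may differ):
        # Old method: these lines lived in /boot/config/go and can be removed on 6.9
        #   modprobe i915
        #   chmod -R 777 /dev/dri
        # New 6.9 method, as I understand it: an empty file on the flash tells
        # Unraid to load the i915 driver itself at boot
        touch /boot/config/modprobe.d/i915.conf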
  2. Happy to report this is going very smoothly. Moved all data off the 4TB drives, turned off auto-start on all non-essential dockers and VMs, shut down Unraid, removed drives, added the new drive. Turned the server back on and clicked New Config. Assigned drives to the appropriate slots and formatted the old parity. Parity rebuild is currently underway on the new drive. Shares, Home Assistant, and essential dockers are currently running with no issues. I did leave 1 of the 4TB drives in. The Blue Iris i9-9900k system only has room for two HDDs (dang gaming cases). 8TB should be enough for 6 cameras.
  3. Just kidding on that one. Between all the crap I've messed with on Unraid and having a pretty hefty/robust Home Assistant setup (lots of yaml and code work), I feel pretty confident I can handle just about anything that pops up. This is just the first time doing this many drives at once. The user guide wasn't specific about what you can do at once. Had the feeling, just needed confirmation. The user guide has a lot of good info for issues, and these forums can pretty much answer most questions with searching. And I won't forget the logs, just in case. Seen too many threads on here about people forgetting or not knowing to grab those.
  4. Thanks. Yeah, I'm pretty confident with parity being down while it rebuilds. The drives are solid. I have never had a sudden shutdown (UPS backup since day one), no SMART errors on any drive, all were pre-cleared before use, and my once-every-3-months parity check has never shown an error. I plan on turning off all VMs except Home Assistant (which is on an unassigned SSD) and only keeping dockers running that pertain to Home Assistant. The array will be seeing very little use while it's rebuilding. Backup strategy: slightly panic, download logs and post here, use Google, and DON'T shut down Unraid or touch anything until I get confirmation of the next step.
  5. Been using Unraid for a while. Pretty dang familiar with it. I've replaced parity before and added an old disk to the array, but this is the first time for this procedure. I bought a new 14TB Easystore (currently pre-clearing). Here's what I want to do.
     Current array:
       • 1x 10TB parity
       • 2x 10TB array
       • 3x 4TB array
     Plan:
       • Replace the 10TB parity with the new 14TB drive
       • Remove the (3) 4TB drives
       • Put the previous 10TB parity into the array
     Can I do all this at once, since parity has to rebuild anyway? I would stop the array, shut down the system, and remove and add drives as outlined. Start Unraid and set it up as a new config. I would of course use unBALANCE first to move all the data to the drives that are staying in the array, and make sure to set the shares appropriately with the proper "Include" settings. Was planning on just putting the 14TB in my Win10 machine for Blue Iris, but thought, what the heck, I should just throw it into the server and use the (3) 4TB drives in a Windows data pool for Blue Iris.
  6. Another one over here. New CX500 500GB drive, about three weeks old. Getting the error since installing. I use it for my Plex appdata (got tired of the mover taking forever when upgrading cache drives). This is going to make me avoid Crucial SSDs for a while. Never had an error with any of my Samsungs, SPs, or SanDisks. Some are 3-4 years old.
  7. Is your downloader (torrent) still working on a file? It won't move until it's done.
  8. Turn every share except system, appdata, and domains to YES. Make sure appdata, domains, and system are set to PREFER. Then run the mover.
  9. You can do this without losing data when creating a btrfs cache pool. Move everything off the cache drive, reformat, create the pool, then put everything back. It's rather easy.
  10. Check it out. This is what you need to do.
  11. Log issues with SABnzbd? I've looked everywhere and can't find the spot to change the logging size, and Google hasn't been very helpful. I'm sure it's staring me right in the face. Edit: I did the extra parameter for setting the maximum logging. It fixed the size. But just for more knowledge, what setting could I have changed in the actual docker GUI or command prompt to set up the app properly?
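     For anyone searching later, my best guess at the config-file route: SABnzbd keeps its settings in sabnzbd.ini in the container's config folder, and the logging section looks something like this (key names are from memory, so double-check against your version):
        [logging]
        max_log_size = 5242880   # bytes per log file before it rotates
        log_backups = 5          # number of rotated logs to keep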
  12. You can browse the cache to see what's on there. Do a manual Move operation. After it's done, click the little file button on the far right of the cache drive on the "Main" tab. There should really only be appdata, domains, and system on there. If there's more, go to that share, set it to "Yes" for cache, and invoke the mover again; it should move it off the cache drive. Once complete, change the share back to "No".
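     If you would rather do this from a terminal, a rough equivalent (standard Unraid paths, but double-check on your box):
        ls /mnt/cache           # see what is actually sitting on the cache drive
        /usr/local/sbin/mover   # invoke the mover manually
        ls /mnt/cache           # confirm only appdata, domains and system remain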
  13. Make sure the setting for that share directory is set to use ALL disks. The default when adding a share is ALL, but if you changed it while messing around in the beginning, there's a chance it is set to use Disk 1 only. Doesn't hurt to double-check. It won't start filling the 2TB unless it's told to use that disk.
  14. Thanks for that. Still not getting readings for the nvme drive. I've tried /nvme*, /nvme0*, and its direct location in the dev folder, /nvme0n1. Anyone have any other options I can try?
  15. HDDTemp docker question. I need to get the temps from nvme0 (m.2 cache). I get all the sd** just fine in Grafana. Assuming I need to add another parameter to the docker. Can I just put a comma? Ex: -q -d -F /dev/sd*, -q -d -F /dev/nvme* Or should I make another HDDTEMP_ARGS variable and duplicate the command with /nvme* at the end instead of /sd*?
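     Following up in case it helps someone: hddtemp takes multiple devices as plain space-separated arguments, so no comma should be needed. A sketch of what I mean (not tested):
        -q -d -F /dev/sd* /dev/nvme*
     That said, from what I have read hddtemp only speaks ATA SMART and may not support NVMe at all, which would explain the missing readings. smartctl can read NVMe temps as a fallback:
        smartctl -A /dev/nvme0   # prints the NVMe health log, including temperature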
  16. Assuming it would probably be a good idea to run a parity check after I'm done moving everything?
  17. I knew it would never be 100MB/sec due to parity. Figured 40MB/sec was a little on the low side though. I wasn't very organized when I set up Unraid originally. Finally getting around to keeping certain shares on certain drives instead of all over the place. All the small ones are no big deal at 40MB/sec, but tonight I'm moving multiple TB of movie files to 2 different drives. Hoping to speed it up at least a little, to 60MB/sec.
  18. That's what I meant. Array disk to another array disk.
  19. Seems like this 40MB/sec is my norm. I've done some transfers and it always seems to level off at this. It starts at 100+ then after a few minutes levels off. Newer system: i7-7700k, all WD Reds, 32GB RAM, etc. This is with transferring large movie files or small files; doesn't seem to make a difference. Turbo write is disabled, which doesn't really help with disk-to-disk anyway. This is a transfer from an existing disk to a new 4TB WD Red. tower-diagnostics-20190120-0906.zip
  20. Eyeballing actual servers. Tons of options out there. Plan on running 3-4 VMs (Win 10 Pro, Win Server 2016, Ubuntu Server x2). Main use is downloading and streaming to multiple devices. A little Plex use; most of my streamers are direct stream. Plex would only be when away in the RV or at friends' houses. Currently run Nextcloud, Sonarr, Radarr, Grafana, Influx, DelugeVPN, and a couple others. Could I get away with less, like 2x E5-2650/2680 v1s? Need it to be future-proof. Wife probably won't allow another purchase anytime soon. Plan on getting one with the H310 so I can flash it to IT mode. Any help is greatly appreciated.
  21. I think I figured out what you meant. I just deleted the docker and reinstalled. All settings were still there. I updated the conf file with the right IP this time, and it runs just fine.
  22. If you know where that file would be located, it would help. It's not in appdata or anywhere I can find. I even had Krusader do a search and nothing came up. I had to bash into the running container to access it originally. Telegraf is easy; its conf file is in appdata.
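     For anyone else stuck at this step: docker cp works even on a stopped container, so you can pull the file out, fix it, and push it back (I'm assuming the container is named influxdb here):
        docker cp influxdb:/etc/influxdb/influxdb.conf /tmp/influxdb.conf
        nano /tmp/influxdb.conf   # fix the bad IP
        docker cp /tmp/influxdb.conf influxdb:/etc/influxdb/influxdb.conf
        docker start influxdb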
  23. I messed up a setting on InfluxDB. Of course the docker will not start now. I typed in the wrong IP when enabling UDP to monitor my Proxmox server. This file specifically: /etc/influxdb/influxdb.conf. Any help would be great. I've done a lot to set up multiple Grafana graphs from different sources, and I don't want to start over. Thanks in advance.
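     For context, the section I was editing looks roughly like this in a stock InfluxDB 1.x config (values are examples, not my real setup):
        [[udp]]
          enabled = true
          bind-address = ":8089"   # this is the line where I typed the wrong IP
          database = "proxmox"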
  24. I did attach a pic above to show how much data it was taking. I'm doing a test now. I have a Win10 Pro VM on Unraid. I set up my TorGuard VPN client and Transmission torrent client, with the Radarr docker pointing to Transmission. We'll see if I have the same issues with this.
  25. So, I recently upgraded my network using pfSense as my firewall. After digging through some reports I noticed that the "Tower" was constantly using data. I shut down VMs and one docker at a time until it stopped. Well, it was DelugeVPN using non-stop data. I was wondering why I was flying through my 1TB of data the last 3 months. This thing is chugging data. Not sure why?? Maybe to help keep the VPN tunnel open? Any help would be appreciated. And no, there was nothing being shared; all torrents get removed very shortly after completion. IMG_0451.HEIC