Everything posted by STGMavrick

  1. I have used this previously to move some data off of disks. unBALANCE needs exclusive read/write access, so anything accessing the array needs to be shut down.
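     A quick way to confirm nothing still has the array open before starting unBALANCE (a minimal sketch; /mnt/user is the standard user-share path, and stopping every container this bluntly is just one option):

        fuser -vm /mnt/user           # list any processes with files open on the user shares
        docker stop $(docker ps -q)   # stop all running containers if anything shows up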
  2. I'm in the process of swapping out a lot of 7-8 year old 2TB drives for newer, larger 14TB drives. I currently have a 3U Supermicro 12-bay chassis plus a Dell MD1200 DAS array. To save power (and to dedicate the MD1200 to snapshots down the road) I'd like to disconnect the remaining 2TB drives in the MD1200. What's the best/safest way to move the data off the remaining four 2TB drives onto the new 14TB drives and shrink my array while keeping the array running? I have a couple of containers that must stay running with access to the array.

     Edit: I should note that it's OK to take the array down for an hour or so to do the actual "delete drives from config" step to shrink it. I just can't have these containers offline for more than a day.
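     For the data move itself, one approach that keeps the array online is a disk-to-disk rsync (a sketch; the disk numbers are illustrative, not from the post):

        # copy the contents of one 2TB disk onto a 14TB disk, preserving attributes
        rsync -avX --remove-source-files /mnt/disk7/ /mnt/disk1/
        # --remove-source-files leaves empty directories behind; clean those up after verifying
        find /mnt/disk7/ -type d -empty -delete

     Keeping both source and destination on /mnt/diskX paths avoids the known pitfalls of mixing /mnt/user and /mnt/diskX paths in a single copy.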
  3. I suppose saying the NUT server is reporting incorrectly was a poor way to put it. It reports exactly what the UPS itself gives as the estimated runtime. But it's awesome to hear there's a fix for it down the road!
  4. Great work so far! I have a question regarding the customizability of the new plugin. I have a Vertiv Liebert GXT5 3000VA connected via USB to a Raspberry Pi 3B running NUT server 2.8.0. The generic UPS driver does give the important info (battery charge level, UPS state, UPS runtime available). However, the UPS runtime reports incorrectly in both the NUT server and the Unraid NUT client. My best guess is that runtime is normally given in seconds and the clients convert it to minutes; the Vertiv GXT5, however, reports raw runtime in minutes, so my Unraid servers report the runtime in seconds (i.e. 40 minutes of runtime reported by the UPS shows as 40 seconds in the NUT client). This makes it impossible to configure safe shutdowns based on runtime remaining. Is there anywhere in the plugin files where I can force NUT to report my runtime accurately?
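     If the plugin offers no hook for rescaling a variable, one stopgap is to do the conversion outside NUT with upsc (a hypothetical sketch; the UPS name gxt5@nutpi is made up for illustration):

        RUNTIME_MIN=$(upsc gxt5@nutpi battery.runtime)   # the GXT5 reports this value in minutes
        echo "runtime: $((RUNTIME_MIN * 60)) seconds"    # rescale to the seconds the clients expect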
  5. Damn, mine started erroring again after 11 days with no issues. After getting page errors and forcing a restart yesterday, I installed the latest release and set the option to 0 just in case. Woke up this morning to a frozen VM and page errors.
  6. Done and done! I'm also keeping the log window open to let it collect more in case the server requires a forced restart.
  7. Haha, the go file was a remnant from trying to manually pass through the iGPU before I found the GVT-g plugin. I'll remove those lines! I've currently selected 4, and it is the only VM using it. The other VM is an HMI that controls my septic system, so there was no need for iGPU support on it. So unfortunately I can't isolate it to one system as the root cause.
  8. I found those same problem threads, but this VM is on Win 10 20H2, which is the latest. Sleep is set to Never, and screen off is Never. Sure, see attached: infrastructure-diagnostics-20210518-0351.zip
  9. I'm having issues with the Windows 10 Blue Iris NVR VM that I installed this on last night. It ran problem-free for about 16 hours until today, when I noticed that CPU/GPU usage was reporting 0 on the cores I assigned to it. The log is full of the following GVT errors: "Infrastructure kernel: gvt: guest page write error, gpa". I wasn't able to recover the VM without a full reboot of the Unraid server. After the system came back up, it ran for another hour before locking up, filling the log with GVT errors again, and forcing another server restart to recover. There are no errors in the system log within the Windows VM. I'm hoping someone can help resolve this, as hardware acceleration drops CPU load by a good 20%.

     System:
       • Supermicro X11SCZ-F
       • 24GB RAM
       • i7-8700k
       • Unraid 6.9.2

     Windows VM setup:
       • 4 cores assigned - 21% load with GVT
       • 8GB allotted - 3.5GB in use
       • cameras are decoding with Intel+VPP
       • iGPU driver is the latest

     During processing it's only utilizing around 35-38% of the GPU, and it's the only VM utilizing GVT-g on this Unraid server.
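     One way to save the errors off the ramdisk so they survive the forced reboots (a sketch; standard Unraid paths, the output filename is arbitrary):

        grep 'gvt' /var/log/syslog > /boot/gvt-errors.txt   # /var/log lives in RAM on Unraid; /boot is the flash drive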
  10. Gotcha, thanks for the help. The end solution for this storage array will be an Unraid backup target, not the primary storage pool. I just wanted to test it, since copying 20TB by any method other than straight from the DAS would take far longer.
  11. That got it accessible. I still need to back up some stuff (Plex) from my old CentOS install, so I changed the mount point back to /Storage; temporarily, I added the share at the original mount point. As of now I can access the network share, but I noticed Unraid doesn't list it in the Shares tab. Is that because it's done via the SMB extras, or because it's not in a /mnt/ location?
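     For anyone comparing notes, the share definition in question lives in the Samba extra configuration (Settings > SMB in Unraid) and looks roughly like this (a sketch; the options shown are illustrative, not the exact config):

        [Storage]
            path = /Storage
            browseable = yes
            writeable = yes

     Shares defined this way are plain Samba shares, which would explain why the Shares tab, which lists only Unraid-managed user/disk shares, doesn't show them.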
  12. I'm 2 days new to Unraid. Currently I have one Unraid server that I built a few days ago running a couple of Windows VMs for NVR stuff. I'm in the process of converting my second bare-metal Plex/media server to Unraid, and I'm running into an issue with ZFS. The original bare-metal box was running a headless CentOS 7 system with a ZFS (RAIDZ2) pool called Storage. After throwing in a quick dummy drive to start my array, I installed the ZFS plugin plus the dashboard GUI plugin. When the ZFS plugin installed, it detected my existing pool and imported it:

     • zpool status reports back my pool config and shows it's online
     • zfs list shows the pool, used/avail storage, and mount point
     • df -h lists the filesystem correctly

     On the CentOS server I had the mountpoint set to /Storage, which it picked up. In the million tabs of Unraid ZFS searching I have open, I read that Unraid wants everything under /mnt/, so I've changed it to /mnt/Storage. No matter what I do, what video I watch (I watched both of Level1Techs'), or what search terms I use on Google, I can't find the solution to what is probably the easiest issue to have: how do I get Unraid to recognize it, or let me add it, as an SMB share? Thank you for your time!
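     For reference, the mountpoint change itself is just the standard ZFS command (pool name Storage as above):

        zfs set mountpoint=/mnt/Storage Storage   # ZFS unmounts and remounts the pool at the new path automatically
        zfs mount -a                              # remount anything that didn't come back on its own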