StevenD

Community Developer
Everything posted by StevenD

  1. Compiled a new version with VMWare Tools 12.3.5 and tested it on unRAID 6.12.7-rc2. I was also finally able to add a status page under the Settings tab again.
  2. I finally got around to playing with that. Unfortunately, it's running a really old version of VMWare Tools (10.1.5). It does work, though. I've never built a Docker image before. Maybe I'll play with it. It would likely be easier to keep updated. Thanks!
  3. Combine them into a single file in this order: private key, cert, fullchain. Then configure unRAID with that single file.
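     For example, a minimal sketch of the concatenation (the filenames are placeholders and will depend on how your certs were issued; the unRAID config path depends on your setup):

        # order as described above: private key, cert, fullchain
        cat private.key cert.pem fullchain.pem > bundle.pem

     Then point unRAID at bundle.pem as its certificate file.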
  4. I don't actually do that any more. ESXi 7 allows you to pass through individual USB devices, so my unRAID flash drive is passed through to the VM and it boots directly now. The VMDK method worked for many years, but I would always forget to update it when I updated the flash drive. No matter what, you cannot get away from having a flash drive tied to a license. There are plenty of motherboards with on-board USB, so the flash drive can stay entirely inside the server.
  5. unRAID notifications via Gmail still work just fine.
  6. I upgraded to -rc3 today and it failed as yours did. I found a typo in a variable and updated the plugin. Thanks!
  7. How are you installing it? I just reinstalled it through CA and it installs just fine.
  8. Since I no longer have to compile a new VMWare Tools package for each release, I added a wildcard to the plugin so that it should work with any 6.12.x or newer version, as long as it uses the v6.x kernel. So if you update to a new version, please let me know if you have any issues. I'll remove the wildcard and go back to individual releases if there are problems. This currently works fine with 6.12.0-rc1 and -rc2, and I expect it to work with any additional -rc and the stable release when they come out.
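     Conceptually, the wildcard is just a pattern match on the running kernel instead of a per-release whitelist. A rough sketch of the idea (not the actual plugin code):

        #!/bin/bash
        # hypothetical check: accept any 6.x kernel rather than listing each unRAID release
        KERNEL="$(uname -r)"
        case "$KERNEL" in
          6.*) echo "kernel $KERNEL is supported by this package" ;;
          *)   echo "kernel $KERNEL is not supported" >&2; exit 1 ;;
        esac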
  9. Updated for 6.11.0 Stable Release. Some things have changed with 6.11, so I had to do a workaround. In order for VMWare Tools to start, I had to add:

        ln -s /usr/lib64/libffi.so.7 /usr/lib64/libffi.so.6

     This could very well have an unintended consequence, or cause an issue with another plugin. I am going to see if I can find someone a little more knowledgeable than me to walk through my plugin and see if there is a way to make it work each time unRAID gets updated. That being said, the plugin does allow you to shut down unRAID cleanly from within vCenter or ESXi, which is all I really care about.
  10. I somehow missed this when you first posted. I recently built two new servers with H12SSL-NTs and Epyc 7443Ps. I'm VERY happy with them.
  11. I do not have any dockers set up with their own IP. Nothing special on the ESXi side, except that I pass through all of my hardware.
  12. I've been running unRAID on ESXi for around 10 years now, so there are no inherent issues in doing that. That being said, I have never seen your issue. I just looked at my config, and I have bonding and bridging off. I assume that's required to run dockers on their own IP.
  13. Updated for 6.10.0 Stable Release.
  14. Obligatory: "the Pro license is cheaper than even a single hard drive. In fact, the unRAID license is the cheapest thing in my servers."
  15. Had a bit of time today, and I managed to get open-vm-tools compiled for 6.10-rc2! I even had a bit of luck on my side, as I was actually able to get it working with the latest VMWare Tools (11.3.5) without the ioctl "errors" spamming the event log. @doron
  16. Check your power supplies. If both are plugged into the backplane, they both have to have power.
  17. No, sorry. I just don't have time right now.
  18. I've messed around with it and can't get it to compile properly. Unfortunately, I have no time to mess with it right now.
  19. The SM 836TQ is a great chassis, and the only chassis that I use. You will want to get some SQ power supplies if you care about noise. However, the X8DTH is ancient and inefficient. You would be better off finding a barebones chassis and adding your own motherboard, or finding something more modern. As soon as prices and availability are better, I'm dumping the rest of my X9s.
  20. Yes, the mover does run overnight. Unfortunately, I can't disable it for a week. At the end of the day, it doesn't really bother me that the parity check basically takes a week. It was a problem for a while, as I was having some crashing. I had to reboot everything to fix the crash, which, of course, cancels the parity check. My RTX 4000 was overheating and disappearing from the bus. I have improved cooling and solved that issue.
  21. I use the tuner plugin. I also only run it every other month. It runs from 10PM on the first Sunday of the month to 6AM the next morning, and runs for either 6 or 7 nights. Sometimes after 6 nights it only has a couple of hours, or even a couple of minutes, left. However, for some reason, with the tuner plugin it seems to take longer than if I were to just let it run straight through. My parity checks also slowed down when I went from two HBAs for the array to one. But I really wanted to install a GPU, so I needed to free up a slot. Now that I have switched to 40GbE networking, I can probably remove one of the 10GbE cards, install another HBA and split the array again. HBA prices (like just about everything else) have gone through the roof. I only paid like $240 for the 9400-16i.
  22. My array, with 16TB parity, finishes its parity check in about 48 hours.
  23. Not for very long, since they are Seagate.