PanteraGSTK

Members
  • Content Count

    118
  • Joined

  • Last visited

Community Reputation

1 Neutral

About PanteraGSTK

  • Rank
    Member

Converted

  • Gender
    Undisclosed


  1. None of those things are associated with this error. A USB device that you have (had) selected in your VM isn't available for some reason.
  2. So far, that's what I've seen if you make any changes to the template. If you simply restart the docker, it goes through some setup, but my test camera is still showing up.
  3. That's exactly what I'm trying to do. I'm probably missing something as to how to add that variable to the docker template.
  4. How would I go about using the RESOLUTION variable for changing the VNC resolution? I've tried adding the variable manually, but it doesn't seem to be working.
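For reference, a template entry along these lines is one way to add a RESOLUTION variable to an unRAID docker template. This is a sketch, not the container's documented config: the variable name comes from the post above, and the value format (e.g. WIDTHxHEIGHT) is an assumption that depends on which image you're running.

```xml
<!-- Hypothetical unRAID template entry: passes RESOLUTION into the container
     as an environment variable. Check the container's docs for the exact
     value format it expects (e.g. 1280x720 vs 1280x720x24). -->
<Config Name="RESOLUTION" Target="RESOLUTION" Default="1280x720"
        Mode="" Description="VNC desktop resolution"
        Type="Variable" Display="always" Required="false" Mask="false">1280x720</Config>
```

The same thing can be done from the unRAID UI by editing the container, clicking "Add another Path, Port, Variable, Label or Device", choosing Config Type "Variable", and setting the Key to RESOLUTION.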
  5. Big fan of the Gigabyte boards myself. I've used them in my desktops for many, many years without issues; haven't had one fail yet.
  6. A GTX 1050 and above will help, but getting a card that doesn't have a limit would be better. It looks like your board can use the E5 26xx v2 series, so you have lots of options. Those CPUs are cheap on eBay depending on which one you want. https://ark.intel.com/content/www/us/en/ark/products/series/78582/intel-xeon-processor-e5-v2-family.html
  7. True, it may be the containers you are using. I have close to 30 dockers, but not all are in use. Portainer is one that is there, but I don't use it unless I need to. Pulling down containers is pretty fast for me, but I'm used to this hardware. Single core clock speed could be a factor, but I'm not sure how much. I'd be interested to see how your performance is after you change your cache pool. Perhaps the RAID aspect is causing it to slow down?
  8. I have a similar setup running on two different servers. One is the main server running 2x 2670 from the same generation as your 2660s. I haven't noticed any such slowdowns and our usage is very similar with the exception that my cache drive is a single m.2 1TB drive. I do have 2x GPU that plex can use for decoding/encoding so that takes a lot off the CPU. That server also has 2x parity and 22 array drives.
  9. Not sure if this will help, but I've been using Hass.io in a VM for quite some time without issue.

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='1'>
       <name>Hassos</name>
       <uuid>23cb88d4-aafe-c513-4ec4-508b8c414efa</uuid>
       <description>Home Assistant</description>
       <metadata>
         <vmtemplate xmlns="unraid" name="Linux" icon="1000px-Home_Assistant_Logo.svg.png" os="linux"/>
       </metadata>
       <memory unit='KiB'>4194304</memory>
       <currentMemory unit='KiB'>4194304</currentMemory>
  10. Sadly these devices typically don't get the DRM capable stuff which is why CoreELEC or something like it is the preferred OS. I had a lower powered version that ran Kodi (LibreELEC) wonderfully and I used the plexkodiconnect addon for connection to my Plex library. Worked great, but UI lagged a bit (granted this is an s905x box which is significantly slower) so I opted for a "normal" HTPC with an nvidia GPU. Then moved the HTPC to a VM since I moved my main unRAID server next to the theater. Works perfectly with the exception of the Windows stuff I have to deal with. On
  11. I've had it working in a VM for some time. It was actually really simple following the guide. https://www.reddit.com/r/homeassistant/comments/atekmq/how_to_install_hassio_on_your_unraid_server_in_a/ Here's my xml if that helps.

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='8'>
       <name>Hass.io</name>
       <uuid>3f36d385-e634-94b3-7fa3-52957fe69186</uuid>
       <description>Home Assistant</description>
       <metadata>
         <vmtemplate xmlns="unraid" name="Linux" icon="logo-small.png" os="linux"/>
  12. Hey guys, hopefully this is the right place for this, but I would like some feedback on what I plan to move from a "real" PC over to a VM. Right now I've got a main unRAID server and am using the trial for what I'm calling the backup server. Right now they are using Syncthing to back up important data from the main server. Sadly, my office closet gets a bit too hot for the main 24-drive server to sit in there anymore. Even with my intake and exhaust fans the temp is just too high. So, the main server will get moved into the equipment rack upstairs where the
  13. Yes, that should be all you have to do. It worked for me moving from a much older docker to this one.
  14. I had that problem until I changed the Controller Host Name to its IP address.
  15. That's what I had been doing and it was working. Now it sort of isn't. When I create the .json I actually save it on the USG then pull it according to this. https://larskleeblatt.net/?p=36 It was working just fine. Thought I'd done something wrong in the syntax, but that wasn't it. It really seems like it's ignoring the file.
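For anyone hitting the same thing, a config.gateway.json looks roughly like this. The static-host-mapping content below is purely illustrative (not the poster's actual file), and the site directory name varies per controller install:

```json
{
  "system": {
    "static-host-mapping": {
      "host-name": {
        "example.lan": {
          "inet": ["192.168.1.50"]
        }
      }
    }
  }
}
```

The file goes in the controller's data/sites/<site_id>/ directory, and changes only take effect after a forced provision of the USG. One common gotcha that matches the "it's ignoring the file" symptom: if the JSON has any syntax error, the controller skips the file without an obvious error message, so it's worth running the file through a JSON validator first.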