
CraziFuzzy

Members
  • 67 posts

Everything posted by CraziFuzzy

  1. What is the current state of this container? It appears that @coppit is no longer maintaining it, so has anyone forked it and made any updates? I noticed that Archive Search is no longer functioning, and my Motion Detector is now complaining about not having a new enough library. My guess is this is a dependency that just needs to be updated in the dockerfile, but I'm not sure.
  2. quick update - it seems the errors I was seeing were due to folder permissions. For some reason the folders that the containers create don't get set with permissions that those same containers can access. chmod 777'ing them got the containers to all start up.. I think.. so far... now to actually learn how to USE graylog...
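For reference, the fix above amounts to something like the following. This is a minimal sketch: the scratch directory stands in for the real appdata folders (e.g. /mnt/user/appdata/graylog, an assumed path), and chmod 777 is the blunt option; chown'ing to the container's runtime UID is tidier where you know it.

```shell
#!/bin/sh
# Sketch of the permissions fix. A scratch directory stands in for the
# container's appdata folder (real paths like /mnt/user/appdata/graylog
# are assumptions; substitute your own).
appdata=$(mktemp -d)
chmod -R 777 "$appdata"   # blunt but effective; chown to the container UID is tidier
stat -c '%a' "$appdata"   # -> 777
```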
  3. So, just tried the above linked compose method and ended up with the same errors as @GTvert90 . I don't feel that this particular service should be so strangely difficult to get running.
  4. I also have been fighting with trying to get Graylog working in an Unraid docker. It seems to be a combination of several problems with how ALL the current dockers are built. For starters, you need other services up and running for Graylog to work, namely Elasticsearch and MongoDB. I have not gotten any of the three docker containers to actually install and 'just work'. All of the containers try to run as root instead of nobody, which causes file permission problems. There also seem to be some version restrictions between what version of Graylog you use and what version of Elasticsearch works with it. Not sure if those are the only problems, but it seems like it may be worth trying to create a container that just includes all three services rolled up together - unfortunately, I don't know enough about any of them to take this on.
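As a rough sketch of the three-service stack described above, plain docker run commands might look like the following. The version pairings (Graylog 4.x with Elasticsearch 7.10 and MongoDB 4.4) and the appdata paths are assumptions; check Graylog's compatibility matrix for your release, and note Graylog also requires GRAYLOG_PASSWORD_SECRET and GRAYLOG_ROOT_PASSWORD_SHA2 to be set.

```shell
#!/bin/sh
# Hedged sketch: bring up MongoDB, Elasticsearch, and Graylog on one docker
# network. Image tags, paths, and ports are assumptions, not a tested recipe.
docker network create graylog

docker run -d --name mongo --network graylog \
  -v /mnt/user/appdata/mongo:/data/db \
  mongo:4.4

docker run -d --name elasticsearch --network graylog \
  -e "discovery.type=single-node" \
  -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \
  -v /mnt/user/appdata/elasticsearch:/usr/share/elasticsearch/data \
  docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2

docker run -d --name graylog --network graylog \
  -p 9000:9000 -p 12201:12201/udp \
  -e GRAYLOG_HTTP_EXTERNAL_URI="http://192.168.1.10:9000/" \
  -v /mnt/user/appdata/graylog:/usr/share/graylog/data \
  graylog/graylog:4.3
```

The IP in GRAYLOG_HTTP_EXTERNAL_URI is a placeholder for the server's address.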
  5. Just doing a major network topology upgrade to the house - and decided to split out my dockers... really wish DHCP worked for the docker containers... I mean.. something creates the MAC address and the IP address for them - should be able to proxy in the DHCP request as well - right? An 'outside the container' service perhaps, optional so that it doesn't need to interfere with containers that DO handle their own DHCP. In the meantime - lots of double work statically assigning container IPs and duplicating the settings in the router's DHCP Server.
  6. Okay, thanks. Any chance of getting this noted in the docker/CA database?
  7. Is anyone able to use minio out of the box on their unraid install right now? It seems they (minio) are saying that it won't work on unraid's array (/mnt/user) shares, as unraid's shfs doesn't support O_DIRECT. I have tested this by mapping it to a single /mnt/disk1 share so it is hitting the xfs directly, but this limits me to a single drive. It looks like minio does support adding multiple drives with some config file editing, but it also seems like they expect to be the only thing on the drives in order to spread space properly. I don't THINK that being limited to a single drive will be a problem for my use, but I am wondering if anyone has come up with a more elegant solution.
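The multi-drive idea might be sketched like this: map several /mnt/diskN paths directly (bypassing the shfs layer at /mnt/user) and hand them all to MinIO's server command. The paths, drive count, and credentials are assumptions, and MinIO's erasure-coded multi-drive mode does expect the drives to be dedicated to it.

```shell
#!/bin/sh
# Hedged sketch: MinIO across several array disks, mapped past /mnt/user so
# it hits xfs directly. Paths and credentials are placeholders.
docker run -d --name minio -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=admin \
  -e MINIO_ROOT_PASSWORD=change-me \
  -v /mnt/disk1/minio:/data1 \
  -v /mnt/disk2/minio:/data2 \
  -v /mnt/disk3/minio:/data3 \
  -v /mnt/disk4/minio:/data4 \
  minio/minio server /data1 /data2 /data3 /data4 --console-address ":9001"
```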
  8. So, I know that the ability to limit a share's reported free space has been a request for a while, and I know it is unlikely. I wonder, however, if it would be possible to create a vdisk or similar disk image, stored on the parity array, and mount IT as a share? This would then present anything accessing it with only the space the image has - so applications that limit their disk use to available space would still obey that limitation. I just don't know for sure if this is readily possible on unraid.
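The mechanics of the vdisk idea can be sketched with standard Linux tools (run as root). Everything here - the image path, size, and mountpoint - is an assumption for illustration; exporting the mountpoint as an SMB share would still need to be wired up in unraid itself.

```shell
#!/bin/sh
# Hedged sketch: a fixed-size image on the array, formatted and loop-mounted,
# so anything using it sees only the image's capacity. Paths/sizes are
# placeholders; requires root.
truncate -s 100G /mnt/user/isos/quota.img   # sparse file; grows as written
mkfs.xfs /mnt/user/isos/quota.img           # mkfs works directly on a file
mkdir -p /mnt/quota
mount -o loop /mnt/user/isos/quota.img /mnt/quota
df -h /mnt/quota                            # reports ~100G regardless of array free space
```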
  9. You might go with a networked trigger from within Xeoma instead of running a local command. I think it's called HTTP Request Sender, sending requests to HA's HTTP sensor.
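The request Xeoma would send could look something like this curl equivalent against Home Assistant's REST API. The host, entity id, and token are placeholders; the /api/states endpoint is standard HA, but whether a binary_sensor is the right target for your setup is an assumption.

```shell
#!/bin/sh
# Hedged sketch: what Xeoma's "HTTP Request Sender" would effectively send to
# flip a Home Assistant sensor. URL, entity id, and token are placeholders.
curl -X POST "http://homeassistant.local:8123/api/states/binary_sensor.xeoma_motion" \
  -H "Authorization: Bearer YOUR_LONG_LIVED_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"state": "on"}'
```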
  10. Yes, I did state that originally it did not have a powered hub, and I thought that might be the issue, so i replaced it with a powered hub, and the same symptoms persist.
  11. Yes - I know the recommendation is to disable sleep on the VM, and I do understand the reasoning for this. I'm curious about what can be done to enable it for one reason - Windows (and other) background processes that do work when the computer is awake, but do nothing when the computer is asleep. Here's an observation I've made with the one Windows 10 VM I've got set up. It's passing through a GPU and a USB controller. I can put Windows to sleep, and Windows does its thing and goes to sleep. The emulator then recognizes this and halts the VM. I believe this halt is unnecessary. I can resume the VM from the unraid GUI, and it appears that Windows is still asleep - the monitor is still off, and judging by CPU load on the pinned CPUs, background processes are stopped - until I actually wake it with the mouse. It then wakes up and is sitting at the login screen - normal Windows behavior. I wonder if it would be possible to have the VM ignore the sleep command from Windows and keep the emulator running, so that it can wake up properly on local (mouse or keyboard) stimulus. Has anyone played around with the various sleep states? Does S1 maybe not halt the VM, for instance?
  12. Oh yeah.. another weird symptom. Once the server boots back up, Enable VMs is set to No. I'm assuming this is some safety factor or something - not sure if that helps point to the source of the problem.
  13. I run my daughter's desktop as a Windows 10 VM on my unRAID server. I pass through the Nvidia video card and an on-board USB controller to her VM, and it has worked fine for quite some time. I recently tried to add a webcam to the setup, connecting it to the hub she already uses for mouse and keyboard, and on starting the camera, the mouse and keyboard stopped responding - I then noticed that the unraid server was unresponsive as well. I hard reset the unraid server, and it eventually recovered. I thought it might be a power issue, as it was using an unpowered USB hub - so I grabbed a nice 3.0 powered hub, and tried it again. Same thing happens. I honestly don't even know where to look for this, to get an idea of what might be causing it.
  14. Is there any specific thing to look for to see why a device chosen in the plugin doesn't bind to vfio, and as such is not available to be selected in a VM? So far, I have not been able to pass through anything other than my video card (and its associated sound). After trying different ACS settings, the card I'm wanting to pass through is grouped with just a PCI bridge (guessing the one it is routed through), so when they are all checked, they still don't bind.
  15. Thanks Squid - I had originally thought about extending the hotplug plugin for this, so it sounds like we came to the need in the same way.
  16. Is there any possibly way to have all devices under a certain USB hub auto-passthrough to a VM? Thinking for multiple VM workstations uses, it'd be nice to have just a USB hub at that station, so that any keyboard, mouse, thumbdrive, camera, etc that is plugged into that hub gets mapped to that VM. I know you can (potentially) map a USB controller to a VM, but that means you have to have at least as many controllers as you have stations - when that's on top of video cards for each station as well, it can add up.
  17. Hmm.. and actually, it looks like there may be an issue with the version fetching in the image anyway. It is currently fetching version2.xml, which it looks like they used through version 19.4.19. They now have a version3.xml with the actual 'current' version of 19.4.22.
  18. So, while I'm here, I thought I'd ask this. What is up with the timezone setting in the Xeoma docker? I noticed that my currently recording archive files are from the future, and when I connected to the container, I found that the timezone is just.. wrong. For example:

      root@Tower:~# date
      Tue Dec 3 21:27:30 PST 2019
      root@Tower:~# docker exec -it fbfeacf9bf2f date
      Wed Dec 4 05:27:34 America 2019

      Is there an environment variable for this container we can set? Xeoma seems to work 'okay' with this, as the client seems to be aware of the issue and translates times to the timezone the client is set to, but this has got to make schedules and such complicated (I don't use any, so not sure if it's a problem in practice).
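Two generic fixes that often work for container timezones are sketched below: a TZ environment variable (honored by many images when tzdata is installed) and mounting the host's localtime read-only. Whether this particular Xeoma image supports either, and the image name itself, are assumptions to verify.

```shell
#!/bin/sh
# Hedged sketch: common ways to align a container's clock zone with the host.
# The image name (coppit/xeoma) is assumed from this thread; other run options omitted.
docker run -d --name xeoma \
  -e TZ=America/Los_Angeles \
  -v /etc/localtime:/etc/localtime:ro \
  coppit/xeoma
docker exec xeoma date   # should now match the host's timezone
```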
  19. I think it may even be a bit stranger than that. You do not get a license for a specific version, you get a license for a specific date. Any versions that are released after your license expiration will not work. This makes it a bit tougher to 'predict' what version you can use (though, of course, sticking to the version that is current when you got your license will work, but you MAY be entitled to a later version).
  20. Is there any way to use USB cameras (or in my case, UVC capture device) with this docker? I realized after getting it installed, that I don't know if unRAID has UVC drivers installed - so not sure if I can actually get video into the Xeoma docker. If not, I may have to go with a windows VM (not the preferred method, I'm sure) to host my server.
  21. This looks great - I wonder if it could be a bit improved. I am wanting to set up a virtual PC for my daughter on my server, running an HDMI and a USB cable to her room, where she'll have a monitor and a powered USB hub to connect keyboard, mouse, headset, etc. My hold-up is that my motherboard has a pretty poor IOMMU arrangement, so I cannot seem to successfully pass through a USB controller to the VM. I can simply pass through the USB devices she needs (keyboard, mouse, headset) via USB passthrough, but this doesn't allow her to plug thumbdrives or other devices into her USB hub. I'm wondering if it would be possible to detect when new USB devices are plugged into unRAID, and if they are beneath a defined USB hub, have them auto-hotplugged to a specified virtual machine.
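One way the detect-and-hotplug idea might be wired up is a udev rule that fires on USB add events and calls a helper that hot-attaches the device with virsh. This is a hypothetical sketch, not an existing unRAID feature: the rule, script path, VM name, and the hub-filtering step are all assumptions.

```shell
#!/bin/sh
# Hypothetical /usr/local/bin/usb-to-vm.sh, called from a udev rule such as:
#   ACTION=="add", SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", \
#     RUN+="/usr/local/bin/usb-to-vm.sh"
# udev exports ID_VENDOR_ID / ID_MODEL_ID for the new device; restricting to a
# specific hub (e.g. by DEVPATH prefix) would still need to be added.
VM="Daughter-Win10"     # assumed VM name
xml=$(mktemp)
cat > "$xml" <<EOF
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x${ID_VENDOR_ID}'/>
    <product id='0x${ID_MODEL_ID}'/>
  </source>
</hostdev>
EOF
virsh attach-device "$VM" "$xml" --live
rm -f "$xml"
```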
  22. Is there a way to set the umask inside the container? I noticed everything is written with the default umask of 022, which makes it difficult to manipulate downloaded files using different unraid/share users.
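To make the ask concrete, here's what the umask changes for newly created files; run anywhere, no container needed. Whether this container exposes a UMASK-style environment variable (as some linuxserver.io images do) is an assumption to verify against its docs.

```shell
#!/bin/sh
# Demo of the umask effect on new files: 022 yields 644, 002 yields
# group-writable 664. Uses a scratch directory; GNU stat assumed.
d=$(mktemp -d)
umask 022
touch "$d/default.txt"
umask 002
touch "$d/group.txt"
stat -c '%a %n' "$d/default.txt" "$d/group.txt"
```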
  23. Is it possible to have some pages served unsecured with this server? I tried adding some locations to the listen 80 server, but I don't know if I truly understand how to set it up. No matter what I try, browsing to the matched URI still redirects to the secure server. My current attempt:

      server {
          listen 80;
          server_name _;
          root /config/publicwww;
          index index.html index.htm index.php;

          location ^~ /public {
              try_files $uri =404;
          }

          location / {
              return 301 https://$host$request_uri;
          }
      }

      Any suggestions on how to tackle this?
  24. Turbo write

    I suppose it would depend on the number of drives involved, and how much faster the mover will run with turbo writes vs. traditional read/modify/write. I mean, if you've got a minimal array of 2 data drives and 1 parity, and using turbo means spinning up 3 drives instead of 2, but those 3 are spun up for half the time, it will end up with fewer drive-hours each night.
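The drive-hours argument above checks out with assumed numbers: the mover duration and the 2x speedup are illustrative assumptions, not measurements.

```shell
#!/bin/sh
# Back-of-the-envelope drive-hours comparison for a 2-data + 1-parity array,
# assuming turbo (reconstruct) write finishes in half the time.
T=4                                  # hours the mover runs without turbo (assumption)
normal=$((2 * T))                    # data drive + parity spinning for T hours
turbo=$((3 * T / 2))                 # all 3 drives spinning for T/2 hours
echo "normal: $normal drive-hours"   # -> 8
echo "turbo:  $turbo drive-hours"    # -> 6
```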
  25. Turbo write

    So it seems that aggressive use of a cache drive should really work well with turbo mode, correct? Those little nuisance writes could be done to the cache, and the whole array only needs to be spun up for turbo writes when the mover runs. Is this what people see in practice? With that in mind, would it make sense to have it turn on turbo writes when the mover starts, and turn them back off when the mover is complete?
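A wrapper for that toggle-around-the-mover idea might look like the following. The mdcmd path and the md_write_method values (1 for reconstruct/turbo write, 0 for the default read/modify/write) are unRAID-specific assumptions worth verifying on your release.

```shell
#!/bin/sh
# Hedged sketch: enable turbo write for a manual mover run, then revert.
# Paths and md_write_method semantics are assumptions about unRAID internals.
/usr/local/sbin/mdcmd set md_write_method 1   # turbo (reconstruct) write on
/usr/local/sbin/mover                         # run the mover
/usr/local/sbin/mdcmd set md_write_method 0   # back to read/modify/write
```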