Everything posted by Caennanu

  1. @0sense I've had similar issues and never fully worked them out myself (on my Ryzen system, which I'm no longer using). What I noticed is that when the GPU is in use by Unraid itself for primary video output, the Docker containers won't see it or be able to use it. With a bit of tinkering there's an odd chance a VM will work with it if you export the ROM and load it in with the VM, but for me that was hit and miss. I'm not that versed in Intel CPUs, but I believe that CPU has an iGPU. Are you perhaps able to set that as the primary graphics adapter in the BIOS? (You might need to hook up a monitor to it, or a dummy plug.) Next to that, there was another thing suggested to me: I had to turn off WHQL support in the BIOS to enable a function that makes first-GPU detection go via the chipset rather than the first PCIe slot, but I forgot what it's called. Will search for it, but maybe someone else knows it off the top of their head? -- edit: Correction, as I had to look it up. Turning WHQL on on the MSI mainboard turned CSM off, which allowed me to use the graphics cards in the proper order.
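For reference, the ROM export mentioned above can be done from the Unraid CLI via sysfs. A minimal sketch, assuming a hypothetical card at 0000:42:00.0 (look up your own address with lspci):

```shell
# Hypothetical PCI address -- replace with your GPU's, as shown by lspci.
DEV=0000:42:00.0
ROM=/sys/bus/pci/devices/$DEV/rom
if [ -e "$ROM" ]; then
    echo 1 > "$ROM"               # unlock the ROM node for reading
    cat "$ROM" > /tmp/vbios.rom   # dump the vBIOS to a file
    echo 0 > "$ROM"               # lock it again
    RESULT="dumped $DEV to /tmp/vbios.rom"
else
    RESULT="no ROM node for $DEV (device absent or ROM not exposed)"
fi
echo "$RESULT"
```

The dumped file can then be pointed at from the VM's configuration as the card's ROM file; whether the guest accepts it is, as noted above, hit and miss.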
  2. Ahh ok, gotcha. Funny thing is, I knew how to, just not why to.
  3. @ich777 Okay, can do that. Binding -- never really read up on what that's actually for. And good to know. But maybe it's similar for Hellraiser? No no, the GT710s are nothing more than offloading basic graphics for the VMs from the CPU, while having the option to hook up a monitoring monitor to them (CCTV montage in a room near where the server is). The 1050 is for Docker containers. And well, I tagged you on that one the other day, so that works.
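For what binding actually does: on Unraid it's normally handled from the System Devices page, but the mechanism underneath can be sketched from the CLI. The address here is a placeholder, not a real device:

```shell
# Sketch of binding a device to the vfio-pci stub driver so the host
# leaves it alone for VM passthrough. Placeholder address -- use lspci.
DEV=0000:43:00.0
if [ -e "/sys/bus/pci/devices/$DEV" ]; then
    echo vfio-pci > "/sys/bus/pci/devices/$DEV/driver_override"
    echo "$DEV" > /sys/bus/pci/drivers_probe
    BOUND="requested vfio-pci for $DEV"
else
    BOUND="device $DEV not present on this system"
fi
echo "$BOUND"
```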
  4. @HellraiserOSU I seem to be having the exact same issue, on a multi-GPU system however. I have it on one of the GT710s I use for passthrough to VMs (I have 2x GT710 and 1x GTX 1050 Ti; the 1050 is used for transcoding in Docker containers, and then there's the AST onboard graphics to boot Unraid). Have you found anything useful? I only get the error when opening the Nvidia driver plugin while the assigned VM is not started. When I start that specific VM, I get the following 'statement': vfio-pci 0000:42:00.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x3f00, but the VM boots without issues and is able to use the card. @ich777 maybe this information helps you? When I have the VM booted, the driver plugin powers up fine and I can open it; however, it shows the 1050 Ti, not any of the other GPUs.
  5. Right, so in case someone else comes across the issue where adding runtime=nvidia doesn't work: I've come to the conclusion that there might be something wrong in the shinobi:nvidia template provided by Spaceinvader. Instead I followed @ich777's instructions included with his Nvidia driver plugin and added the variables manually. So... under Extra Parameters I added '--runtime=nvidia', and I manually added the variables NVIDIA_VISIBLE_DEVICES and NVIDIA_DRIVER_CAPABILITIES as described here: By doing this, the container was successfully able to start, and hardware acceleration seems to work when turning it on for the different cameras. Hope this information helps someone set this up! Time for me to monitor my memory usage.
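For anyone doing the same by hand, those template settings translate to roughly the following docker run flags. The image name and device selector below are assumptions; a specific GPU UUID from `nvidia-smi -L` can be used instead of `all`:

```shell
# Assumed image name and device selector -- adjust to your setup.
IMAGE=shinobisystems/shinobi
GPU=all    # or a single GPU UUID from `nvidia-smi -L`
CMD="docker run -d --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=$GPU \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  $IMAGE"
echo "$CMD"
```

On Unraid, `--runtime=nvidia` goes in the template's Extra Parameters field and the two `-e` values become container variables, which is exactly what the plugin instructions describe.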
  6. Not saying I'm unable, just that I don't know how -- in other words, I need guidance. The official support page shows command-line configuration done directly on a Linux system, which isn't what I'm doing with Unraid. I know how to open the CLI and know some basic commands, but that's about it. I'm currently testing ZoneMinder but haven't figured it out yet, as in I only seem to be getting a crapload of pictures instead of actual videos (now, pictures work too... in a way). But I haven't really had time to dive into it just yet. I have just looked up Frigate, and from the description I can already tell I'd have an issue with it. For one, the detection area uses squares for regions, which is going to be absolutely tedious to set up: I'm on a small farm, and the only thing that's actually somewhat square is the driveway. Due to privacy regulations I'm not allowed to record public roads, for example, and the way cameras work, the field of view is always more like a cone regardless of the camera's position, never a square. At the same time it has no motion detection, which is a must-have feature for me (monitoring horses, for example). So for now that's a no-go.
  7. G'day all, since the Docker images for Shinobi are fairly outdated or lacking support, I figured I would install the image from Docker Hub directly. So far so good: I got the container up and running, can access my cameras, can set up motion detection, etc. However, I'm running into issues setting up things like TensorFlow or hardware-accelerated en/decoding, let alone how to point the container at a proper save location. This is of course partly due to my lack of knowledge of command-line configuration; however, the container supplies a plugin manager. So the questions are: how would I set these things up using the plugin manager? How do I 'change' the location of the footage? Is there perhaps a good video tutorial I can use? Hope someone can help me!
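On the save-location question specifically: with a Docker Hub image, the footage path is usually redirected with a bind mount rather than from inside the app. A sketch, where both paths are assumptions -- check which directory the image actually records into:

```shell
# Both paths are assumptions -- substitute your Unraid share and the
# directory the Shinobi image actually writes its recordings to.
HOST_DIR=/mnt/user/cctv
CONTAINER_DIR=/home/Shinobi/videos
MAPPING="-v $HOST_DIR:$CONTAINER_DIR"
echo "docker run $MAPPING shinobi"
```

Anything the container writes to that internal directory then lands on the Unraid share, which is the usual way to 'change' a storage location without touching the app's own config.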
  8. G'day all, just a quick question. I have already found out that I need to connect a monitor (or dummy plug) to a video card in order to use it via passthrough and be able to remote into a VM that is using said GPU. But does the same apply for a GPU that's used to decode/encode for Docker containers, like Plex and Shinobi?
  9. Good day, today I installed the ZoneMinder docker and of course read the descriptions. It says to select half my memory for the SHM size, but I can't find the reason for this. And in all honesty... since I'll be sporting 128 GB of memory soon, wouldn't 64 GB of SHM be a bit much?
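The half-your-RAM figure is just a generous rule of thumb: ZoneMinder keeps each camera's ring buffer of decoded frames in shared memory (/dev/shm), so what's actually needed scales with camera count, resolution and buffer length, not with total RAM. A sketch of capping it explicitly; the value and image name are assumptions to tune:

```shell
# 8g is an assumption -- size it to your cameras, not your total RAM.
# Rough need per monitor: buffer frame count x width x height x bytes/pixel.
SHM_SIZE=8g
echo "docker run --shm-size=$SHM_SIZE zoneminder"
```

If the segment is too small ZoneMinder logs shared-memory errors for the affected monitors, so it's easy to start low and grow it.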
  10. So... is this docker still supported? Replies to questions seem scarce...
  11. Correction to the above: even with hardware acceleration disabled it's using an awful lot of memory (23 GB of RAM currently).
  12. Good day all, I recently upgraded my system so I can start using the Nvidia build. However, I'm running into two issues. 1. When adding the runtime parameter, the command fails and the docker refuses to start. 2. Without the runtime parameter, and with hardware acceleration selected, the docker uses 30 GB of RAM. Anyone know what is wrong?
  13. Alright, it seems to be working now. Visuals are the same; the only difference is that I've restarted the container a couple of times. Any idea why they won't post/echo like they did previously?
  14. The docker is set to host. That I can't assign port mappings I'm aware of, because it uses the ports provided in the docker's config file. But it should still read those into the port mappings field, which it did previously. Right now, since that field is empty, the docker is not reachable. Before today, it would state this when offline: and this when online But that is not happening anymore.
  15. No problem, being too quick can happen when you're trying to solve something. I switched it back to host and get the same issue: the field for port mapping stays empty, even when applying a static IP.
  16. Same thing, but now with host as the network. Or am I not getting you?
  17. Alright, that makes sense from that screenshot. Here's a more proper screenshot, with the log open next to the field that should have the port mapping. As you can see, the other dockers running in bridge mode at least show 0.0.0.0:###, while teamspeak (not teamspeak3) does not.
  18. G'day good sirs and madams, I have also posted this in the engine section, as I believe it is more related to the Docker engine than to the docker itself, but figured I would 'double' post it here too in case it IS related to the docker. Yesterday I migrated systems, from Ryzen to Epyc, and everything was fine. This morning I have issues with the TeamSpeak docker actually getting a port mapping. It says it boots just fine, but with no IP assigned (host, bridge or manual) it's obviously unreachable. Any idea if this could be part of the docker? I would at least expect a 0.0.0.0:####, not blank.
  19. Solved: the backup ran fine once the VM manager and the Docker service were stopped.
  20. Good day all, since I'm on the verge of swapping my Unraid system from a Ryzen 7 to an Epyc system, I'm trying to create a backup of my Unraid flash drive, because you never know what's going to happen. I have made a backup before. However, today when trying to create a backup from the main page > Flash > Backup, I'm getting the following error messages in my logs, but I don't really know what they mean. Can anyone help, so that I can make a backup this way instead of copying all the contents from the USB drive?
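As a fallback while the web backup is failing: the flash drive is mounted at /boot, so a plain copy of that directory captures everything the backup button would. A sketch, where the destination path is an assumption:

```shell
# Destination is an assumption -- point it at any share with free space.
SRC=/boot
DEST="/tmp/flash-backup-$(date +%Y%m%d)"
mkdir -p "$DEST"
if [ -d "$SRC" ]; then
    cp -a "$SRC/." "$DEST/" && STATUS="copied $SRC to $DEST" || STATUS="copy failed"
else
    STATUS="$SRC not found on this system"
fi
echo "$STATUS"
```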
  21. Right, so during boot it selects which devices it can allocate resources to and 'dedicates' lanes to them. If it can't allocate resources, it simply won't see the device, and thus it won't show up at all to be used. That makes sense. Then the only conclusion I can draw is that I have a defective controller or disk. Darn it. Luckily the Epyc system will have onboard controllers, so I can at least test that bit.
  22. Well yes, so when all lanes are busy and you try to write something, and thus claim a lane to write over, it won't, and so you get errors? The only other options I can think of are a defective controller or disk. But I only seem to experience this issue when I'm adding things (be it hardware additions or more I/O).
  23. Good day all, currently I'm (still) running a Ryzen 7 1700 as my Unraid box. My Epyc system is in the mail, but I wanted to double-check something. Recently my parity disk keeps having errors and thus failing. Since I experienced this issue before when adding more PCIe cards, I'm thinking it is related. Currently I have a GT710 (x1 card), a GTX 1050 Ti (x16 card), a SAS controller (x4 card) and a NIC (x1 card) installed in the system. Together that would mean I'm using 22 PCIe lanes, where the Ryzen platform only has 20 for PCIe slots, with 4 more used for the chipset. These cards alone would over-utilize the maximum by 2, and since I'm also running an NVMe drive on those lanes, I would theoretically sit at 26 PCIe lanes, over-utilizing by 6. Setting the 1050 to x8 (since I really don't need all 16 lanes for encoding CCTV) doesn't really solve the issue though. Can over-utilization of PCIe lanes cause disks to fail?
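The tally above, written out (lane counts per card and the 20-lane figure are taken from the post itself):

```shell
# Lane counts per card, as listed in the post above.
GT710=1; GTX1050TI=16; SAS_HBA=4; NIC=1; NVME=4
TOTAL=$((GT710 + GTX1050TI + SAS_HBA + NIC + NVME))
AVAILABLE=20   # CPU lanes usable for slots/NVMe on this platform
echo "requested: $TOTAL lanes, available: $AVAILABLE, over by $((TOTAL - AVAILABLE))"
```

Note the board can't actually hand out more lanes than exist; as discussed in the follow-up replies, slots either negotiate a narrower link or the firmware simply fails to allocate resources to a device, in which case it never shows up.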
  24. Alright, makes sense. Thanks for the reply!