xorinzor

Everything posted by xorinzor

  1. But how will creating a local file help me in getting CA to recognize the template? This container should eventually be installable on other unraid installations too.
  2. Can someone please explain how I can get the custom template to be recognized / used by CA for both searching and installing the container? This is my container: https://hub.docker.com/r/xorinzor/shoutz0r I tried creating this template, but that didn't seem to do much: https://github.com/xorinzor/docker-templates/blob/master/xorinzor/shoutz0r.xml I haven't been able to find any kind of guide or tutorial either, so I'm pretty much guessing here based on what other people did.
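For anyone hitting the same wall: an unRAID/CA template is a single XML file. A minimal sketch might look roughly like the following (field names based on common community templates; the port values and category here are illustrative guesses, not taken from the actual shoutz0r template):

```xml
<?xml version="1.0"?>
<Container version="2">
  <!-- Name shown in the Apps tab and on the Docker page -->
  <Name>shoutz0r</Name>
  <!-- Docker Hub repository the image is pulled from -->
  <Repository>xorinzor/shoutz0r</Repository>
  <Registry>https://hub.docker.com/r/xorinzor/shoutz0r</Registry>
  <Network>bridge</Network>
  <Overview>Short description shown in Community Applications.</Overview>
  <Category>MediaServer:</Category>
  <!-- Hypothetical port mapping; adjust to whatever the image exposes -->
  <Config Name="WebUI Port" Target="80" Default="8080" Mode="tcp"
          Type="Port" Display="always" Required="true" Mask="false"/>
</Container>
```

Note that for CA search to find it, the template repository generally has to be registered with Community Applications itself (historically by asking in the CA support thread); merely having the XML on GitHub is not enough.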
  3. Using Unraid 6.6.7 I have 2 shares, "downloads" and "media". Files that have finished downloading are written to a directory that's checked for changes by a FileBot container, which then renames and moves the files to their respective directory in the media share. I used to have caching enabled for the media share, but recently decided it was better to have these files written to the array directly. For some reason, however, FileBot still leaves these files on the cache drive instead of writing them to the array. At first I thought it was an error on my part, because the path used was still set to "/mnt/cache", but after changing it to "/mnt/user" it still happened, even after I removed the "Media" directory from the cache drive. Oddly enough, manually invoking the mover doesn't move the files either.
  4. Hi, I've installed the MotionEye Docker container and attached the USB webcam to it via the extra parameters: --device=/dev/bus/usb/002/004 which shows up fine in the container. But MotionEye fails to detect it as a webcam. I've read around that a USB webcam is supposed to show up as a /dev/video* device, but it doesn't for me. Is there any way to get this working without using a VM? "lsusb" shows: Bus 002 Device 004: ID 046d:0823 Logitech, Inc. It's a Logitech C920 (I think). And it works totally fine on my Windows desktop, so I can confirm it isn't broken.
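A /dev/video* node only appears when a UVC driver on the host is bound to the camera; passing the raw USB bus address into the container is not enough for MotionEye. A rough diagnostic sequence on the Unraid host (assuming the uvcvideo module is available in the Unraid kernel — on some builds it is not, in which case no container flag will help):

```shell
# On the Unraid host: check whether the kernel created a video device
ls -l /dev/video*

# If nothing shows up, try loading the UVC webcam driver
modprobe uvcvideo

# Once /dev/video0 exists, pass that node to the container
# instead of the raw USB bus address:
#   --device=/dev/video0
```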
  5. Ah, I didn't realize they already counted towards the limit. Nevermind then.
  6. If the drive were part of the array, either by extending it or functioning as a cache drive, I would agree with you. However, I'm just using it as a separate drive to store my config files on, because I want to prevent my HDDs from spinning up unnecessarily. I feel like the device count only counts (and should stay that way) towards the array functionality, as that's what UnRaid is mainly about. If this plugin were integrated with unraid and these drives started counting towards the device limit, a lot of people in the community would get very unhappy. And reasonably so.
  7. I figured I'd post my results here since this is the first result to come up in google for this issue. If you have used "Community Applications" to install docker containers (i.e. the Apps tab), it is safe to remove the docker image file. After you have deleted the docker image file, created a new one, and started your docker service back up, you can head back to the Apps tab and click on the list icon in the top-left to go to your previously installed apps, as jonathanm explains. When these are installed, all settings as you had previously configured your containers will be restored. The only things you may have to fix are custom-made docker networks and the autostart setting for your containers, as these will all be turned on by default. Other than that, you lose absolutely nothing. EDIT: Worth noting, it looks like it doesn't use the "Post Arguments" when building the container. For these to be applied you will have to edit the container and just press "apply" again.
  8. I created a post in the support thread for UD, referencing this post and asking if the same SMART attribute monitoring toggle config can be added. As long as it's a separate plugin it makes sense that this wouldn't be something Unraid could do anything about. Thanks for the help!
  9. Regarding: with the normal disks mounted within unraid itself you can enable/disable the monitoring of certain SMART values. Disks mounted via Unassigned Devices don't have this option, and seem to follow the default settings as set in "UnRaid: settings -> disk settings". Would it be possible to implement the same SMART attribute monitoring menu as in unraid, where these can be toggled on/off?
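In the meantime, the raw attributes for a UD-mounted disk can still be read manually from the terminal (the device name is a placeholder; substitute whatever Unassigned Devices shows for the drive):

```shell
# Print the SMART attribute table for the drive, including
# reallocated and pending sector counts (attributes 5 and 197)
smartctl -A /dev/sdX
```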
  10. Looks like that workaround will do for now! Of course a more proper way of doing this would be even better, but at least I'm no longer getting spammed with warning notifications. Thanks!
  11. I'll try that out, thanks. As long as it works, even as a workaround, I'll be happy. It's kind of turning into a situation where the server keeps crying "wolf" at the moment.
  12. Turns out this doesn't work in my specific case, because I mounted the SSD using the "Unassigned Devices" plugin. When I click on the device name, it doesn't show the SMART attributes like the other drives do.
  13. I would really like the ability to exclude devices from the notifications that unraid sends. For example, I have an SSD installed and will, on every write operation, get emails about it having bad sectors; I know about them, but I don't consider the drive in need of replacement, since it only contains config files and is working fine so far. The only option I have to stop unraid from sending me those emails right now, however, is disabling warnings entirely, which would prevent unraid from sending me "legitimate" warnings about my array disks. Having a dropdown list next to each notification entity where you could either whitelist or blacklist devices would really be a massive help in managing these notifications more specifically, even if it's not something most people need.
  14. So it turns out that creating this network caused Docker to add the following rules to the iptables on the host, preventing traffic:

      Chain DOCKER-ISOLATION-STAGE-1 (1 references)
      num  target  prot  opt  source          destination
      1    DROP    all   --   !10.10.0.0/16   anywhere
      2    DROP    all   --   anywhere        !10.10.0.0/16

      Removing these 2 rules caused everything to work as intended. I presume they got added because of the "--internal" parameter upon network creation. Interestingly, you'd think that by removing these the "--internal" parameter would become obsolete, but setting the gateway on the container to that of the host doesn't give the container internet access. Setting it back to the VPN container does (via the VPN). EDIT: it looks like removing these rules did give containers in the default bridge network the ability to ping containers inside this "internal" vpn network though.. guess more research is needed. EDIT 2: Looks like all "--internal" does is add these iptables rules, if I am to believe a serverfault user.
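For reference, those rules can be inspected and removed like this (Docker will typically re-create them when the daemon restarts or the network is re-created, so treat this as a temporary measure):

```shell
# List the isolation rules with their rule numbers
iptables -L DOCKER-ISOLATION-STAGE-1 -n --line-numbers

# Delete rule 1 twice: after the first deletion the remaining
# rule shifts up into position 1
iptables -D DOCKER-ISOLATION-STAGE-1 1
iptables -D DOCKER-ISOLATION-STAGE-1 1
```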
  15. I double checked just to be sure: it's set to privileged. The gateway is set correctly (and is set automatically by a script executed via the post-parameter field). I confirmed this using "ip route", which shows the default gateway is configured as 10.10.0.2.
  16. I created an internal network using "docker network create --internal --subnet=10.10.0.0/16 vpn" with the goal of using my VPN docker container as a gateway, and having all internet traffic of the other containers in this network flow through it. Unfortunately, for whatever reason, the container just won't forward the traffic, and I'm not sure why. I've tried many different tutorials, stackoverflow posts, etc., but none work, even though when I used "--link" instead of the internal network bridge, it worked great. (I should add that I configured the correct gateway IP on the containers, and they can ping each other just fine too.) Are there any limitations to using the "--internal" parameter? I've tried multiple iptables commands and different openvpn "redirect-gateway" options, but I'm really at a loss right now.

      The openvpn container (gateway) has the following setup:
      eth0 - internal network - 10.10.0.2
      eth1 - default bridge - 172.17.0.2
      tun0 - vpn connection - <dynamic ip>

      and the other client only has:
      eth0 - internal network - 10.10.0.3

      I tried using these iptables commands:
      iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
      iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
      iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT

      as well as:
      echo 1 > /proc/sys/net/ipv4/ip_forward
      which I confirmed by reading the value back, to be sure it got set.

      Right now the containers can ping each other, and the gateway container has access to the internet (confirmed via the vpn connection by using ipify.org). The other container is unable to ping 1.1.1.1 or 8.8.8.8 and can only ping 10.10.0.2. Using "ip route get 1.1.1.1" shows the route is set correctly, as it tries to contact the gateway container.
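One thing worth noting about a setup like this: the MASQUERADE rule above NATs traffic out of eth1 (the default bridge), while traffic that should travel through the VPN has to leave via tun0. A sketch of the gateway-side rules with the tunnel as the egress interface (a guess at the intended routing, not a confirmed fix):

```shell
# Inside the VPN (gateway) container:
# enable forwarding between interfaces
sysctl -w net.ipv4.ip_forward=1

# NAT traffic from the internal network out of the VPN tunnel
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE

# Allow forwarding internal -> tunnel, and return traffic back
iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
iptables -A FORWARD -i tun0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
```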
  17. I've been quite busy lately and forgot about this topic. It's still very much an issue and I would really like some advice.
  18. I have set up my squid-proxy container to connect to my VPN by using the extra parameter --net="container:vpn" in combination with Network type: none, to make sure no data leaks to the public connection outside of the VPN. This works absolutely great, but one issue I've run into is that if, for whatever reason, I need to restart my VPN container, the connection between these containers won't be re-established; I have to make sure to restart the squid-proxy container afterwards as well. And if both are stopped, I need to make sure the VPN container is started before the squid-proxy container in order for the connection to be established. Not being very experienced with docker, I'm wondering if there is a better way of doing this, or perhaps a way of doing the same thing where the connection is automatically re-established when it detects that the container has started again.
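One way to automate the re-attach is a small host-side watcher script, sketched below (container names "vpn" and "squid-proxy" as used above; check the filters against your docker version):

```shell
#!/bin/bash
# Restart the dependent container whenever the VPN container (re)starts,
# so its --net=container:vpn namespace join is re-established.
docker events \
  --filter 'container=vpn' \
  --filter 'event=start' \
  --format '{{.Status}}' |
while read -r _; do
  docker restart squid-proxy
done
```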
  19. I'm running into the problem that all sent emails end up as "250 - Message queued". All ports are open and confirmed to be open using an online tool. When configuring SMTP relay to my local IP they get sent, but immediately return a Delivery Failure email (because it checks whether the domain is configured on my own mailserver, instead of looking up which mailserver to send it to). Wasn't this supposed to be an all-in-one solution? Right now it seems like it's just completely incapable of sending emails and only capable of receiving them. So, for clarification: emails aren't being sent, and seemingly not even an attempt to send them is being made. Not to be confused with emails getting rejected or ending up in SPAM (which isn't one of my concerns, since it's for personal use only anyway). I tried checking log files, but they don't tell me anything useful either, only that the message got queued, and nothing after that. EDIT: Turns out my ISP is blocking outgoing port 25 somewhere along the way. I didn't bother to check at first, since I never ran into a similar issue with my ISP before, so I figured they weren't blocking anything.
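A quick way to test for an ISP block on outgoing port 25 from the server itself (the MX hostname is just a public example; if the connection times out here but the same command works from, say, a VPS, the ISP is filtering the port):

```shell
# Try to open a TCP connection to a public mail exchanger on port 25
nc -vz -w 5 aspmx.l.google.com 25
```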
  20. Too bad, I was hoping there was maybe some way to configure it to change its under-the-hood behaviour.
  21. I've set up shares for individual folders on my disk, such as "downloads" and "media", both of which use the cache drive to initially store files (and downloads stay on the cache drive). However, when I want to move a file from "downloads" to "media", it starts copying the file as if it were on a separate disk, requiring me to log in to unraid and use the terminal to move the file, which happens instantly. Is this just a limitation of samba, or is there a config tweak that can be applied here to get the same result as I would have in the terminal? I'm using windows 10 on my PC, so a client-side tweak works for me too; I'd be happy to learn something new.
  22. He may have disabled it, but he's still not the original author of the project, merely someone trying to be. Just because the name happens to match doesn't mean this, or something similar, won't happen again in the future. I'd like the change to SickChill, as that's the software we were all originally using and what we should keep using.
  23. Wow, I'm glad you guys found this out. Really not happy about this and will be disabling this container on my server for the time being.
  24. I'd like to see a feature where notifications can be customized to a certain extent, for example using a template system where you can add placeholders and thereby modify both the subject line and the message contents. Currently my SSD is dying and I get spammed about unrecoverable sectors, but because I can't edit the subject line, and the number of sectors is included in that line, each and every notification ends up as a new email. Preferably I'd change it to something like "[Unraid] %Type% - %Status%", which would turn into "[Unraid] disk sdb - Error" (or something like that).