
Bungy

Community Developer
Posts: 375
Everything posted by Bungy

  1. I just updated the docker and I believe I *may* have fixed the issue. Can you update and let me know if it works for you?
  2. Potentially. I thought I added in all the commands to port the nzbget.conf file for the new docker. Can you send me your nzbget.conf file in a PM? Leave out any username/password info that you don't want to share.
  3. That looks pretty good. I could do a full edit just to make the text read a bit better if you like. You may want to explicitly tell the user which commands to try when migrating dockers before starting fresh. Your user base has the ability to grow quickly and may contain people without much hands-on Linux knowledge. It may also be important to tell users that you can't change the abc user's uid or gid after the first time the docker is run without manually issuing the chown. Simply changing the environment variables may cause the docker not to run.
  4. Sounds like a good idea. I think it's completely reasonable to force users either to set permissions themselves or to check that things are compatible before migration. It's a bit difficult to do when each docker uses a slightly different directory structure for config, so simply documenting the directory structure and required files may be a good start.
  5. It didn't work when I migrated from a different docker with different permissions to this one. I fixed it by manually fixing permissions so I guess this is only for migration cases.
  6. Quoting: "Hey Bungy, our base image sets chown to abc on /config, then we run all processes as abc so any new file would have the correct ownership. https://github.com/linuxserver/docker-baseimage/blob/master/10_add_user_abc.sh Is everything working how it should?" I think the problem is here: chown abc:abc /config It doesn't recurse into the directory; I think it needs the -R flag.
  7. I just tried this image and it seems it doesn't automatically chown -Rv abc:users the /config directory.
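To illustrate why the -R flag matters here, this is a minimal demonstration you can run anywhere. It uses chmod instead of chown so it works without root (the recursion behavior is the same), and the /tmp path is purely for the demo:

```shell
# Set up a directory tree like /config with a file inside it.
mkdir -p /tmp/demo_config/sub
touch /tmp/demo_config/sub/file
chmod 644 /tmp/demo_config/sub/file

# Without -R, only the top-level directory is changed:
chmod 700 /tmp/demo_config
stat -c %a /tmp/demo_config/sub/file   # prints 644 -- the file is untouched

# With -R, the change recurses into sub/ and the file:
chmod -R 700 /tmp/demo_config
stat -c %a /tmp/demo_config/sub/file   # prints 700
```

The same applies to `chown abc:abc /config`: without -R, files already inside /config keep their old ownership.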
  8. Yeah, this is definitely brand new territory for google! Does /dev/ttyS0 or /dev/ttyUSB0 show up in unraid? If so, you may be able to pass those through directly to the openhab container instead of trying to pass the usb device. Otherwise, my guess is the driver needs to be installed in the container and maybe on the host.
  9. I have been through something similar to this with my mochad container. I pass through a usb device in order to send X10 commands. I do this by mounting the device "/dev/bus/usb/004" to "/dev/bus/usb/004" in the container. You can figure out which usb bus the device is plugged into by typing lsusb in the unraid terminal. The Bus number tells you which device to pass through. In my example, the X10 controller is on Bus 004, so I pass through /dev/bus/usb/004. To check whether your device is passed through properly, enter the bash terminal for the openhab docker (docker exec -it openhab bash) and see if /dev/ttyS0 or /dev/ttyUSB0 shows up (ls -l /dev).
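A sketch of what that passthrough looks like on the docker command line. The bus number, container name, and image name here are illustrative; match them to your own lsusb output and setup:

```shell
# Find which USB bus the controller sits on (run on the unraid host):
lsusb

# Pass the whole bus through when creating the container
# ("openhab" and the image name below are placeholders):
docker run -d --name=openhab \
  -v /dev/bus/usb/004:/dev/bus/usb/004 \
  your/openhab-image

# Verify from inside the container:
docker exec -it openhab ls -l /dev/bus/usb/004
```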
  10. The other thing you can try is forwarding port 8000 and then accessing through http://WANIP:8000/owncloud The downside here is your traffic is routed through http instead of https and will therefore be unencrypted. Trying this may help to reveal any packet forwarding issues that you may have and should probably not be used in the long term.
  11. Quoting my earlier post: "The default port mapping is for port 8443 being routed to the owncloud container and it looks like you're using port 8442. Have you double checked which port you're actually using? This may be the reason that the port forwarding from the router is not working." Reply: "Double checked, ports forwarded correctly on router. I had something on 8443 already, which is why I had to swap it to 8442." I normally use an apache reverse proxy to access my owncloud remotely so that I don't have to open up a lot of different ports. I just tried forwarding port 8443 from my router to my unraid machine, and it worked perfectly. Accessing via https://WANIP:8443/owncloud got me right to the interface. I'm not quite sure what's wrong with your setup, but I'll think about it for a while.
  12. The default port mapping is for port 8443 being routed to the owncloud container and it looks like you're using port 8442. Have you double checked which port you're actually using? This may be the reason that the port forwarding from the router is not working.
  13. Dave, I just pushed an update to the docker template that should fix the links for the webui. If you have already created your container, go change the webui link to http://[iP]:[PORT:8000]/owncloud if you're not using https or https://[iP]:[PORT:8443]/owncloud if you are using https
  14. Sorry I couldn't help. I'm out of my element without having that hardware and knowing how it all works together.
  15. It looks like that configuration option is optional. Have you tried it without that config?
  16. Try running: docker exec -it openhab ifconfig Grab the eth0 ip address and set the ip address in your config to that number. This is a dirty hack since this ip will change when the container restarts, but it should at least tell you whether that part of the config is the problem. The message you're getting occurs pretty frequently for me too (and for many others). Neither I nor anyone else has reported adverse effects, so I've been ignoring it for the time being.
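If ifconfig isn't available inside the container, the same address can be read from the docker engine itself. This is a sketch assuming the container is named openhab and is on the default bridge network:

```shell
# Ask docker for the container's bridge IP without exec'ing into it:
docker inspect -f '{{ .NetworkSettings.IPAddress }}' openhab
```

Like the ifconfig approach, the address it prints can change whenever the container restarts, so treat it as a diagnostic rather than a permanent config value.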
  17. It looks like it's still trying to connect using the RPi's ip 2015-09-01 17:54:01.332 [ERROR] [b.k.i.connection.KNXConnection] - Error connecting to KNX bus: on connect to /192.168.178.100:3671: Invalid argument
  18. Ahhh i see. Try setting that IP to 127.0.0.1. I believe that it should be connecting to the openhab docker and not to the host (unraid).
  19. Hmm and it works in your pi with the same version of openhab and the same binding version?
  20. Hmm. Well it looks like you can connect to that machine. Have you double checked your knx binding configuration? Also, is there any way to get connection logs from the KNX side to see if it's a failed authorization or something like that? I have the link in the Readme markdown for the docker, but I definitely should put it along with the openhab template for unraid. I've been meaning to update my documentation, but time is short these days.
  21. So, I'm not familiar with the KNX gateway or the KNX binding, but my guess is that the docker is unable to connect to that ip address. Try running this command to see if your docker can ping the knx gateway. docker exec -it openhab ping 192.168.178.100 I'm guessing it got the ip address 192.168.178.100 from your openhab.cfg file. If openhab cannot ping that ip address, I would start looking into connectivity issues. Make sure your unraid can access that ip, check your firewall, check that the ip is correct, etc. As for #2, habmin is now installed by default. Simply point your browser to: http://tower:8080/habmin. Of course adjust the hostname and port to match your configuration. Let me know how it goes. I hope this helps you out.
  22. Quoting: "Thank you for replying. I deleted everything to start fresh. Couldn't remove the /https binding at first configuration since the remove button was greyed out. After the first config I removed the binding and also set the TARGET_DIR variable to / so that owncloud would show right away. Why would you want it to start as /owncloud? Then it worked! So it was the https binding. I ran into another problem though. After first-time setup I chose to use the included mariadb, but now owncloud only displays an error that /usr/share/webapps/owncloud/data is readable by other users and that I should set 0770 permissions. This is all new for me since I'm a windows user, so once I get it up and running I'll also look at creating my own test certificate." There is one thing to keep in mind when running the docker without the /https volume mount: it will use the built-in ssl certificates, which is a potential security risk. If somebody gets the certificates, they would be able to sniff the data transferred over the network. This is probably a small risk, but still a risk. The easy way around it is to create the server.crt and server.key files yourself using the guide I posted earlier. To fix your current problem, run these commands on your unraid server, assuming your owncloud volume mounts are stored in /mnt/cache/appdata/owncloud: chmod -Rv 0770 /mnt/cache/appdata/owncloud/data chown -Rv sshd:sshd /mnt/cache/appdata/owncloud/data In my opinion, the docker should set these permissions on bootup, but I'm not the creator of the docker. If I get time, I may submit a merge request that sets these permissions.
  23. From the log, it looks like your mysql container is starting correctly. If you want to get it to work with owncloud, start the container with environment variables for: -e MYSQL_DATABASE=owncloud -e MYSQL_USER=owncloud -e MYSQL_PASSWORD=password_for_owncloud_database Then, when you first initialize your owncloud container, it'll ask what type of database you want to use. It defaults to sqlite, but you can change it to mysql. Provide it with the database user, name, and password that you set in your mysql docker using the environment variables.
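Putting those variables together, a first-run command might look like this sketch. The container name, volume path, and passwords are placeholders, and MYSQL_ROOT_PASSWORD is an additional variable the official mysql image requires on first start:

```shell
# First run: the image creates the "owncloud" database and user,
# granting that user access to that database.
docker run -d --name=mysql \
  -e MYSQL_ROOT_PASSWORD=choose_a_root_password \
  -e MYSQL_DATABASE=owncloud \
  -e MYSQL_USER=owncloud \
  -e MYSQL_PASSWORD=password_for_owncloud_database \
  -v /mnt/cache/appdata/mysql:/var/lib/mysql \
  mysql
```

Note these variables only take effect on the very first start, when the data directory is empty; they won't change credentials on an existing database volume.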
  24. @d4lions, @X1pheR For owncloud - Try removing the /https volume mapping. This mapping is required if you want to provide your own ssl certificates; if the directory does not contain server.crt and server.key files, the docker will not run. Alternatively, you can generate your own https ssl certificates - which is highly recommended, since using the built-in certificates is a security risk. Here is a good link for how to generate the certificates: http://www.akadia.com/services/ssh_test_certificate.html
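For reference, a minimal sketch of generating the self-signed pair in one non-interactive command. The file names server.key/server.crt match what the container expects per the post above; the subject fields and output directory are placeholders you should adjust:

```shell
# Create a 2048-bit RSA key and a self-signed certificate valid for
# one year, with no passphrase, in a single non-interactive command:
mkdir -p /tmp/https
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /tmp/https/server.key \
  -out /tmp/https/server.crt \
  -subj "/C=US/ST=State/L=City/O=Home/CN=tower.local"

# Sanity-check the resulting certificate:
openssl x509 -noout -subject -in /tmp/https/server.crt
```

Point the /https volume mapping at the directory holding the two files. Browsers will still warn about a self-signed certificate, but the private key is yours rather than one shared by every copy of the image.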
  25. Ashok, can you post your configuration? I'm also using mysql with my owncloud docker, so I should be able to help you out.